Test Report: KVM_Linux_crio 19344

a7cb5fa386cf1d53f99b10a4cfa08a192cef42dd:2024-07-29:35558

Test fail (12/215)

TestAddons/Setup (2400.06s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-416933 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p addons-416933 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: signal: killed (39m59.957284594s)

-- stdout --
	* [addons-416933] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "addons-416933" primary control-plane node in "addons-416933" cluster
	* Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image ghcr.io/helm/tiller:v2.17.0
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	  - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	  - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	  - Using image docker.io/marcnuri/yakd:0.0.5
	  - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	  - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	  - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	  - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	  - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	  - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	  - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	  - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	  - Using image docker.io/registry:2.8.3
	  - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	  - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	  - Using image docker.io/busybox:stable
	  - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	* To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-416933 service yakd-dashboard -n yakd-dashboard
	
	* Verifying ingress addon...
	* Verifying registry addon...
	* Verifying csi-hostpath-driver addon...
	  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	  - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	* Verifying gcp-auth addon...
	* Your GCP credentials will now be mounted into every pod created in the addons-416933 cluster.
	* If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	* If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	* Enabled addons: default-storageclass, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, helm-tiller, cloud-spanner, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver

-- /stdout --
** stderr ** 
	I0729 19:24:56.969537  742115 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:24:56.970070  742115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:24:56.970091  742115 out.go:304] Setting ErrFile to fd 2...
	I0729 19:24:56.970098  742115 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:24:56.970557  742115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 19:24:56.971817  742115 out.go:298] Setting JSON to false
	I0729 19:24:56.973061  742115 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":11244,"bootTime":1722269853,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:24:56.973143  742115 start.go:139] virtualization: kvm guest
	I0729 19:24:56.975046  742115 out.go:177] * [addons-416933] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:24:56.976595  742115 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 19:24:56.976658  742115 notify.go:220] Checking for updates...
	I0729 19:24:56.978812  742115 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:24:56.979946  742115 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 19:24:56.981281  742115 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 19:24:56.982442  742115 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 19:24:56.983633  742115 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 19:24:56.985052  742115 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:24:57.017281  742115 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 19:24:57.018471  742115 start.go:297] selected driver: kvm2
	I0729 19:24:57.018489  742115 start.go:901] validating driver "kvm2" against <nil>
	I0729 19:24:57.018501  742115 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 19:24:57.019195  742115 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:24:57.019264  742115 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:24:57.035413  742115 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:24:57.035470  742115 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 19:24:57.035686  742115 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 19:24:57.035745  742115 cni.go:84] Creating CNI manager for ""
	I0729 19:24:57.035758  742115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:24:57.035765  742115 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 19:24:57.035837  742115 start.go:340] cluster config:
	{Name:addons-416933 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-416933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:24:57.035942  742115 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:24:57.037497  742115 out.go:177] * Starting "addons-416933" primary control-plane node in "addons-416933" cluster
	I0729 19:24:57.038646  742115 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:24:57.038691  742115 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:24:57.038702  742115 cache.go:56] Caching tarball of preloaded images
	I0729 19:24:57.038780  742115 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 19:24:57.038791  742115 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 19:24:57.039135  742115 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/config.json ...
	I0729 19:24:57.039170  742115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/config.json: {Name:mk8f534da32dd2d4a535947bf9d2c0134f4d8fe4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:24:57.039304  742115 start.go:360] acquireMachinesLock for addons-416933: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 19:24:57.039347  742115 start.go:364] duration metric: took 29.884µs to acquireMachinesLock for "addons-416933"
	I0729 19:24:57.039364  742115 start.go:93] Provisioning new machine with config: &{Name:addons-416933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-416933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:24:57.039426  742115 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 19:24:57.040735  742115 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0729 19:24:57.040860  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:24:57.040898  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:24:57.055630  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45877
	I0729 19:24:57.056093  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:24:57.056902  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:24:57.056932  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:24:57.057319  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:24:57.057590  742115 main.go:141] libmachine: (addons-416933) Calling .GetMachineName
	I0729 19:24:57.057756  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:24:57.057941  742115 start.go:159] libmachine.API.Create for "addons-416933" (driver="kvm2")
	I0729 19:24:57.057976  742115 client.go:168] LocalClient.Create starting
	I0729 19:24:57.058012  742115 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem
	I0729 19:24:57.254554  742115 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem
	I0729 19:24:57.417901  742115 main.go:141] libmachine: Running pre-create checks...
	I0729 19:24:57.417926  742115 main.go:141] libmachine: (addons-416933) Calling .PreCreateCheck
	I0729 19:24:57.418538  742115 main.go:141] libmachine: (addons-416933) Calling .GetConfigRaw
	I0729 19:24:57.419050  742115 main.go:141] libmachine: Creating machine...
	I0729 19:24:57.419067  742115 main.go:141] libmachine: (addons-416933) Calling .Create
	I0729 19:24:57.419251  742115 main.go:141] libmachine: (addons-416933) Creating KVM machine...
	I0729 19:24:57.420544  742115 main.go:141] libmachine: (addons-416933) DBG | found existing default KVM network
	I0729 19:24:57.421484  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:24:57.421245  742138 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015c30}
	I0729 19:24:57.421514  742115 main.go:141] libmachine: (addons-416933) DBG | created network xml: 
	I0729 19:24:57.421525  742115 main.go:141] libmachine: (addons-416933) DBG | <network>
	I0729 19:24:57.421604  742115 main.go:141] libmachine: (addons-416933) DBG |   <name>mk-addons-416933</name>
	I0729 19:24:57.421639  742115 main.go:141] libmachine: (addons-416933) DBG |   <dns enable='no'/>
	I0729 19:24:57.421650  742115 main.go:141] libmachine: (addons-416933) DBG |   
	I0729 19:24:57.421657  742115 main.go:141] libmachine: (addons-416933) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 19:24:57.421669  742115 main.go:141] libmachine: (addons-416933) DBG |     <dhcp>
	I0729 19:24:57.421675  742115 main.go:141] libmachine: (addons-416933) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 19:24:57.421682  742115 main.go:141] libmachine: (addons-416933) DBG |     </dhcp>
	I0729 19:24:57.421687  742115 main.go:141] libmachine: (addons-416933) DBG |   </ip>
	I0729 19:24:57.421697  742115 main.go:141] libmachine: (addons-416933) DBG |   
	I0729 19:24:57.421707  742115 main.go:141] libmachine: (addons-416933) DBG | </network>
	I0729 19:24:57.421734  742115 main.go:141] libmachine: (addons-416933) DBG | 
	I0729 19:24:57.426788  742115 main.go:141] libmachine: (addons-416933) DBG | trying to create private KVM network mk-addons-416933 192.168.39.0/24...
	I0729 19:24:57.494300  742115 main.go:141] libmachine: (addons-416933) DBG | private KVM network mk-addons-416933 192.168.39.0/24 created
	I0729 19:24:57.494338  742115 main.go:141] libmachine: (addons-416933) Setting up store path in /home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933 ...
	I0729 19:24:57.494353  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:24:57.494226  742138 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 19:24:57.494377  742115 main.go:141] libmachine: (addons-416933) Building disk image from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 19:24:57.494467  742115 main.go:141] libmachine: (addons-416933) Downloading /home/jenkins/minikube-integration/19344-733808/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 19:24:57.795535  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:24:57.795358  742138 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa...
	I0729 19:24:58.130124  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:24:58.129930  742138 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/addons-416933.rawdisk...
	I0729 19:24:58.130179  742115 main.go:141] libmachine: (addons-416933) DBG | Writing magic tar header
	I0729 19:24:58.130237  742115 main.go:141] libmachine: (addons-416933) DBG | Writing SSH key tar header
	I0729 19:24:58.130281  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:24:58.130097  742138 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933 ...
	I0729 19:24:58.130308  742115 main.go:141] libmachine: (addons-416933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933
	I0729 19:24:58.130330  742115 main.go:141] libmachine: (addons-416933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines
	I0729 19:24:58.130356  742115 main.go:141] libmachine: (addons-416933) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933 (perms=drwx------)
	I0729 19:24:58.130368  742115 main.go:141] libmachine: (addons-416933) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines (perms=drwxr-xr-x)
	I0729 19:24:58.130378  742115 main.go:141] libmachine: (addons-416933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 19:24:58.130387  742115 main.go:141] libmachine: (addons-416933) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube (perms=drwxr-xr-x)
	I0729 19:24:58.130426  742115 main.go:141] libmachine: (addons-416933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808
	I0729 19:24:58.130449  742115 main.go:141] libmachine: (addons-416933) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808 (perms=drwxrwxr-x)
	I0729 19:24:58.130460  742115 main.go:141] libmachine: (addons-416933) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 19:24:58.130467  742115 main.go:141] libmachine: (addons-416933) DBG | Checking permissions on dir: /home/jenkins
	I0729 19:24:58.130473  742115 main.go:141] libmachine: (addons-416933) DBG | Checking permissions on dir: /home
	I0729 19:24:58.130481  742115 main.go:141] libmachine: (addons-416933) DBG | Skipping /home - not owner
	I0729 19:24:58.130491  742115 main.go:141] libmachine: (addons-416933) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 19:24:58.130498  742115 main.go:141] libmachine: (addons-416933) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 19:24:58.130503  742115 main.go:141] libmachine: (addons-416933) Creating domain...
	I0729 19:24:58.131612  742115 main.go:141] libmachine: (addons-416933) define libvirt domain using xml: 
	I0729 19:24:58.131631  742115 main.go:141] libmachine: (addons-416933) <domain type='kvm'>
	I0729 19:24:58.131642  742115 main.go:141] libmachine: (addons-416933)   <name>addons-416933</name>
	I0729 19:24:58.131653  742115 main.go:141] libmachine: (addons-416933)   <memory unit='MiB'>4000</memory>
	I0729 19:24:58.131661  742115 main.go:141] libmachine: (addons-416933)   <vcpu>2</vcpu>
	I0729 19:24:58.131669  742115 main.go:141] libmachine: (addons-416933)   <features>
	I0729 19:24:58.131678  742115 main.go:141] libmachine: (addons-416933)     <acpi/>
	I0729 19:24:58.131686  742115 main.go:141] libmachine: (addons-416933)     <apic/>
	I0729 19:24:58.131692  742115 main.go:141] libmachine: (addons-416933)     <pae/>
	I0729 19:24:58.131697  742115 main.go:141] libmachine: (addons-416933)     
	I0729 19:24:58.131702  742115 main.go:141] libmachine: (addons-416933)   </features>
	I0729 19:24:58.131719  742115 main.go:141] libmachine: (addons-416933)   <cpu mode='host-passthrough'>
	I0729 19:24:58.131737  742115 main.go:141] libmachine: (addons-416933)   
	I0729 19:24:58.131762  742115 main.go:141] libmachine: (addons-416933)   </cpu>
	I0729 19:24:58.131770  742115 main.go:141] libmachine: (addons-416933)   <os>
	I0729 19:24:58.131774  742115 main.go:141] libmachine: (addons-416933)     <type>hvm</type>
	I0729 19:24:58.131808  742115 main.go:141] libmachine: (addons-416933)     <boot dev='cdrom'/>
	I0729 19:24:58.131826  742115 main.go:141] libmachine: (addons-416933)     <boot dev='hd'/>
	I0729 19:24:58.131833  742115 main.go:141] libmachine: (addons-416933)     <bootmenu enable='no'/>
	I0729 19:24:58.131840  742115 main.go:141] libmachine: (addons-416933)   </os>
	I0729 19:24:58.131848  742115 main.go:141] libmachine: (addons-416933)   <devices>
	I0729 19:24:58.131854  742115 main.go:141] libmachine: (addons-416933)     <disk type='file' device='cdrom'>
	I0729 19:24:58.131865  742115 main.go:141] libmachine: (addons-416933)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/boot2docker.iso'/>
	I0729 19:24:58.131871  742115 main.go:141] libmachine: (addons-416933)       <target dev='hdc' bus='scsi'/>
	I0729 19:24:58.131876  742115 main.go:141] libmachine: (addons-416933)       <readonly/>
	I0729 19:24:58.131880  742115 main.go:141] libmachine: (addons-416933)     </disk>
	I0729 19:24:58.131886  742115 main.go:141] libmachine: (addons-416933)     <disk type='file' device='disk'>
	I0729 19:24:58.131894  742115 main.go:141] libmachine: (addons-416933)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 19:24:58.131901  742115 main.go:141] libmachine: (addons-416933)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/addons-416933.rawdisk'/>
	I0729 19:24:58.131908  742115 main.go:141] libmachine: (addons-416933)       <target dev='hda' bus='virtio'/>
	I0729 19:24:58.131920  742115 main.go:141] libmachine: (addons-416933)     </disk>
	I0729 19:24:58.131935  742115 main.go:141] libmachine: (addons-416933)     <interface type='network'>
	I0729 19:24:58.131946  742115 main.go:141] libmachine: (addons-416933)       <source network='mk-addons-416933'/>
	I0729 19:24:58.131959  742115 main.go:141] libmachine: (addons-416933)       <model type='virtio'/>
	I0729 19:24:58.131975  742115 main.go:141] libmachine: (addons-416933)     </interface>
	I0729 19:24:58.131992  742115 main.go:141] libmachine: (addons-416933)     <interface type='network'>
	I0729 19:24:58.132005  742115 main.go:141] libmachine: (addons-416933)       <source network='default'/>
	I0729 19:24:58.132015  742115 main.go:141] libmachine: (addons-416933)       <model type='virtio'/>
	I0729 19:24:58.132023  742115 main.go:141] libmachine: (addons-416933)     </interface>
	I0729 19:24:58.132045  742115 main.go:141] libmachine: (addons-416933)     <serial type='pty'>
	I0729 19:24:58.132060  742115 main.go:141] libmachine: (addons-416933)       <target port='0'/>
	I0729 19:24:58.132073  742115 main.go:141] libmachine: (addons-416933)     </serial>
	I0729 19:24:58.132086  742115 main.go:141] libmachine: (addons-416933)     <console type='pty'>
	I0729 19:24:58.132097  742115 main.go:141] libmachine: (addons-416933)       <target type='serial' port='0'/>
	I0729 19:24:58.132107  742115 main.go:141] libmachine: (addons-416933)     </console>
	I0729 19:24:58.132113  742115 main.go:141] libmachine: (addons-416933)     <rng model='virtio'>
	I0729 19:24:58.132123  742115 main.go:141] libmachine: (addons-416933)       <backend model='random'>/dev/random</backend>
	I0729 19:24:58.132127  742115 main.go:141] libmachine: (addons-416933)     </rng>
	I0729 19:24:58.132132  742115 main.go:141] libmachine: (addons-416933)     
	I0729 19:24:58.132152  742115 main.go:141] libmachine: (addons-416933)     
	I0729 19:24:58.132167  742115 main.go:141] libmachine: (addons-416933)   </devices>
	I0729 19:24:58.132180  742115 main.go:141] libmachine: (addons-416933) </domain>
	I0729 19:24:58.132188  742115 main.go:141] libmachine: (addons-416933) 
	I0729 19:24:58.136575  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:b2:29:aa in network default
	I0729 19:24:58.137262  742115 main.go:141] libmachine: (addons-416933) Ensuring networks are active...
	I0729 19:24:58.137283  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:24:58.137994  742115 main.go:141] libmachine: (addons-416933) Ensuring network default is active
	I0729 19:24:58.138541  742115 main.go:141] libmachine: (addons-416933) Ensuring network mk-addons-416933 is active
	I0729 19:24:58.139013  742115 main.go:141] libmachine: (addons-416933) Getting domain xml...
	I0729 19:24:58.139687  742115 main.go:141] libmachine: (addons-416933) Creating domain...
	I0729 19:24:59.333601  742115 main.go:141] libmachine: (addons-416933) Waiting to get IP...
	I0729 19:24:59.335047  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:24:59.335509  742115 main.go:141] libmachine: (addons-416933) DBG | unable to find current IP address of domain addons-416933 in network mk-addons-416933
	I0729 19:24:59.335532  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:24:59.335486  742138 retry.go:31] will retry after 306.378465ms: waiting for machine to come up
	I0729 19:24:59.644090  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:24:59.644552  742115 main.go:141] libmachine: (addons-416933) DBG | unable to find current IP address of domain addons-416933 in network mk-addons-416933
	I0729 19:24:59.644583  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:24:59.644484  742138 retry.go:31] will retry after 355.138345ms: waiting for machine to come up
	I0729 19:25:00.001108  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:00.001558  742115 main.go:141] libmachine: (addons-416933) DBG | unable to find current IP address of domain addons-416933 in network mk-addons-416933
	I0729 19:25:00.001588  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:25:00.001505  742138 retry.go:31] will retry after 370.675351ms: waiting for machine to come up
	I0729 19:25:00.374187  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:00.374724  742115 main.go:141] libmachine: (addons-416933) DBG | unable to find current IP address of domain addons-416933 in network mk-addons-416933
	I0729 19:25:00.374751  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:25:00.374669  742138 retry.go:31] will retry after 518.459922ms: waiting for machine to come up
	I0729 19:25:00.894401  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:00.894862  742115 main.go:141] libmachine: (addons-416933) DBG | unable to find current IP address of domain addons-416933 in network mk-addons-416933
	I0729 19:25:00.894887  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:25:00.894801  742138 retry.go:31] will retry after 582.975213ms: waiting for machine to come up
	I0729 19:25:01.479604  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:01.480108  742115 main.go:141] libmachine: (addons-416933) DBG | unable to find current IP address of domain addons-416933 in network mk-addons-416933
	I0729 19:25:01.480140  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:25:01.480067  742138 retry.go:31] will retry after 850.632995ms: waiting for machine to come up
	I0729 19:25:02.332870  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:02.333252  742115 main.go:141] libmachine: (addons-416933) DBG | unable to find current IP address of domain addons-416933 in network mk-addons-416933
	I0729 19:25:02.333287  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:25:02.333213  742138 retry.go:31] will retry after 782.858632ms: waiting for machine to come up
	I0729 19:25:03.117291  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:03.117758  742115 main.go:141] libmachine: (addons-416933) DBG | unable to find current IP address of domain addons-416933 in network mk-addons-416933
	I0729 19:25:03.117791  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:25:03.117702  742138 retry.go:31] will retry after 1.364379659s: waiting for machine to come up
	I0729 19:25:04.484346  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:04.484792  742115 main.go:141] libmachine: (addons-416933) DBG | unable to find current IP address of domain addons-416933 in network mk-addons-416933
	I0729 19:25:04.484821  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:25:04.484737  742138 retry.go:31] will retry after 1.437630571s: waiting for machine to come up
	I0729 19:25:05.924565  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:05.924942  742115 main.go:141] libmachine: (addons-416933) DBG | unable to find current IP address of domain addons-416933 in network mk-addons-416933
	I0729 19:25:05.924976  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:25:05.924886  742138 retry.go:31] will retry after 2.196153996s: waiting for machine to come up
	I0729 19:25:08.122753  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:08.123145  742115 main.go:141] libmachine: (addons-416933) DBG | unable to find current IP address of domain addons-416933 in network mk-addons-416933
	I0729 19:25:08.123177  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:25:08.123099  742138 retry.go:31] will retry after 2.806200865s: waiting for machine to come up
	I0729 19:25:10.933094  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:10.933550  742115 main.go:141] libmachine: (addons-416933) DBG | unable to find current IP address of domain addons-416933 in network mk-addons-416933
	I0729 19:25:10.933592  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:25:10.933495  742138 retry.go:31] will retry after 2.907653846s: waiting for machine to come up
	I0729 19:25:13.843279  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:13.843716  742115 main.go:141] libmachine: (addons-416933) DBG | unable to find current IP address of domain addons-416933 in network mk-addons-416933
	I0729 19:25:13.843748  742115 main.go:141] libmachine: (addons-416933) DBG | I0729 19:25:13.843707  742138 retry.go:31] will retry after 3.859980653s: waiting for machine to come up
	I0729 19:25:17.705857  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:17.706369  742115 main.go:141] libmachine: (addons-416933) Found IP for machine: 192.168.39.249
	I0729 19:25:17.706395  742115 main.go:141] libmachine: (addons-416933) Reserving static IP address...
	I0729 19:25:17.706410  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has current primary IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:17.706753  742115 main.go:141] libmachine: (addons-416933) DBG | unable to find host DHCP lease matching {name: "addons-416933", mac: "52:54:00:dd:df:c7", ip: "192.168.39.249"} in network mk-addons-416933
	I0729 19:25:17.782204  742115 main.go:141] libmachine: (addons-416933) DBG | Getting to WaitForSSH function...
	I0729 19:25:17.782234  742115 main.go:141] libmachine: (addons-416933) Reserved static IP address: 192.168.39.249
	I0729 19:25:17.782246  742115 main.go:141] libmachine: (addons-416933) Waiting for SSH to be available...
	I0729 19:25:17.784943  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:17.785408  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:17.785447  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:17.785557  742115 main.go:141] libmachine: (addons-416933) DBG | Using SSH client type: external
	I0729 19:25:17.785586  742115 main.go:141] libmachine: (addons-416933) DBG | Using SSH private key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa (-rw-------)
	I0729 19:25:17.785636  742115 main.go:141] libmachine: (addons-416933) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.249 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 19:25:17.785653  742115 main.go:141] libmachine: (addons-416933) DBG | About to run SSH command:
	I0729 19:25:17.785665  742115 main.go:141] libmachine: (addons-416933) DBG | exit 0
	I0729 19:25:17.908164  742115 main.go:141] libmachine: (addons-416933) DBG | SSH cmd err, output: <nil>: 
	I0729 19:25:17.908456  742115 main.go:141] libmachine: (addons-416933) KVM machine creation complete!
	I0729 19:25:17.908807  742115 main.go:141] libmachine: (addons-416933) Calling .GetConfigRaw
	I0729 19:25:17.909381  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:17.909606  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:17.909789  742115 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 19:25:17.909804  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:17.911688  742115 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 19:25:17.911704  742115 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 19:25:17.911712  742115 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 19:25:17.911720  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:17.914081  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:17.914482  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:17.914519  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:17.914644  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:17.914875  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:17.915030  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:17.915176  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:17.915333  742115 main.go:141] libmachine: Using SSH client type: native
	I0729 19:25:17.915546  742115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 19:25:17.915560  742115 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 19:25:18.015462  742115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:25:18.015489  742115 main.go:141] libmachine: Detecting the provisioner...
	I0729 19:25:18.015496  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:18.018679  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:18.019146  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:18.019211  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:18.019385  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:18.019636  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:18.019834  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:18.020023  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:18.020188  742115 main.go:141] libmachine: Using SSH client type: native
	I0729 19:25:18.020426  742115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 19:25:18.020441  742115 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 19:25:18.124701  742115 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 19:25:18.124824  742115 main.go:141] libmachine: found compatible host: buildroot
	I0729 19:25:18.124838  742115 main.go:141] libmachine: Provisioning with buildroot...
	I0729 19:25:18.124850  742115 main.go:141] libmachine: (addons-416933) Calling .GetMachineName
	I0729 19:25:18.125120  742115 buildroot.go:166] provisioning hostname "addons-416933"
	I0729 19:25:18.125149  742115 main.go:141] libmachine: (addons-416933) Calling .GetMachineName
	I0729 19:25:18.125406  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:18.128206  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:18.128668  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:18.128699  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:18.128834  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:18.129056  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:18.129211  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:18.129390  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:18.129600  742115 main.go:141] libmachine: Using SSH client type: native
	I0729 19:25:18.129777  742115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 19:25:18.129790  742115 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-416933 && echo "addons-416933" | sudo tee /etc/hostname
	I0729 19:25:18.245339  742115 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-416933
	
	I0729 19:25:18.245384  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:18.248201  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:18.248563  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:18.248606  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:18.248838  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:18.249057  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:18.249236  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:18.249423  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:18.249603  742115 main.go:141] libmachine: Using SSH client type: native
	I0729 19:25:18.249761  742115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 19:25:18.249775  742115 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-416933' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-416933/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-416933' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 19:25:18.360523  742115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 19:25:18.360564  742115 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 19:25:18.360610  742115 buildroot.go:174] setting up certificates
	I0729 19:25:18.360623  742115 provision.go:84] configureAuth start
	I0729 19:25:18.360634  742115 main.go:141] libmachine: (addons-416933) Calling .GetMachineName
	I0729 19:25:18.360955  742115 main.go:141] libmachine: (addons-416933) Calling .GetIP
	I0729 19:25:18.363383  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:18.363728  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:18.363748  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:18.363947  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:18.365987  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:18.366377  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:18.366407  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:18.366550  742115 provision.go:143] copyHostCerts
	I0729 19:25:18.366642  742115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 19:25:18.366771  742115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 19:25:18.366841  742115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 19:25:18.366908  742115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.addons-416933 san=[127.0.0.1 192.168.39.249 addons-416933 localhost minikube]
	I0729 19:25:18.667288  742115 provision.go:177] copyRemoteCerts
	I0729 19:25:18.667389  742115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 19:25:18.667422  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:18.670558  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:18.670894  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:18.670922  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:18.671114  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:18.671335  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:18.671529  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:18.671690  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:18.754127  742115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 19:25:18.776534  742115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 19:25:18.798348  742115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 19:25:18.820714  742115 provision.go:87] duration metric: took 460.075953ms to configureAuth
	I0729 19:25:18.820751  742115 buildroot.go:189] setting minikube options for container-runtime
	I0729 19:25:18.820926  742115 config.go:182] Loaded profile config "addons-416933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:25:18.821007  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:18.823710  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:18.824067  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:18.824098  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:18.824214  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:18.824448  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:18.824621  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:18.824732  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:18.824875  742115 main.go:141] libmachine: Using SSH client type: native
	I0729 19:25:18.825035  742115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 19:25:18.825048  742115 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 19:25:19.078357  742115 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 19:25:19.078397  742115 main.go:141] libmachine: Checking connection to Docker...
	I0729 19:25:19.078410  742115 main.go:141] libmachine: (addons-416933) Calling .GetURL
	I0729 19:25:19.079852  742115 main.go:141] libmachine: (addons-416933) DBG | Using libvirt version 6000000
	I0729 19:25:19.082209  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:19.082629  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:19.082658  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:19.082877  742115 main.go:141] libmachine: Docker is up and running!
	I0729 19:25:19.082902  742115 main.go:141] libmachine: Reticulating splines...
	I0729 19:25:19.082911  742115 client.go:171] duration metric: took 22.02492688s to LocalClient.Create
	I0729 19:25:19.082937  742115 start.go:167] duration metric: took 22.02499792s to libmachine.API.Create "addons-416933"
	I0729 19:25:19.082950  742115 start.go:293] postStartSetup for "addons-416933" (driver="kvm2")
	I0729 19:25:19.082963  742115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 19:25:19.082989  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:19.083294  742115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 19:25:19.083337  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:19.086021  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:19.086442  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:19.086469  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:19.086628  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:19.086850  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:19.087024  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:19.087177  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:19.167660  742115 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 19:25:19.171652  742115 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 19:25:19.171679  742115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 19:25:19.171750  742115 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 19:25:19.171773  742115 start.go:296] duration metric: took 88.817271ms for postStartSetup
	I0729 19:25:19.171810  742115 main.go:141] libmachine: (addons-416933) Calling .GetConfigRaw
	I0729 19:25:19.172623  742115 main.go:141] libmachine: (addons-416933) Calling .GetIP
	I0729 19:25:19.175363  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:19.175659  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:19.175686  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:19.175902  742115 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/config.json ...
	I0729 19:25:19.176121  742115 start.go:128] duration metric: took 22.136683556s to createHost
	I0729 19:25:19.176148  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:19.178168  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:19.178636  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:19.178657  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:19.178803  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:19.179069  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:19.179276  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:19.179505  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:19.179760  742115 main.go:141] libmachine: Using SSH client type: native
	I0729 19:25:19.179950  742115 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.249 22 <nil> <nil>}
	I0729 19:25:19.179966  742115 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 19:25:19.284626  742115 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722281119.259735098
	
	I0729 19:25:19.284650  742115 fix.go:216] guest clock: 1722281119.259735098
	I0729 19:25:19.284659  742115 fix.go:229] Guest: 2024-07-29 19:25:19.259735098 +0000 UTC Remote: 2024-07-29 19:25:19.176135293 +0000 UTC m=+22.240538920 (delta=83.599805ms)
	I0729 19:25:19.284730  742115 fix.go:200] guest clock delta is within tolerance: 83.599805ms
	I0729 19:25:19.284738  742115 start.go:83] releasing machines lock for "addons-416933", held for 22.245380995s
	I0729 19:25:19.284769  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:19.285144  742115 main.go:141] libmachine: (addons-416933) Calling .GetIP
	I0729 19:25:19.287902  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:19.288288  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:19.288319  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:19.288437  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:19.289027  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:19.289239  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:19.289343  742115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 19:25:19.289403  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:19.289502  742115 ssh_runner.go:195] Run: cat /version.json
	I0729 19:25:19.289531  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:19.292254  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:19.292499  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:19.292587  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:19.292673  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:19.292771  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:19.292861  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:19.292884  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:19.292965  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:19.293051  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:19.293246  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:19.293303  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:19.293439  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:19.293482  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:19.293633  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:19.410293  742115 ssh_runner.go:195] Run: systemctl --version
	I0729 19:25:19.416149  742115 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 19:25:19.571695  742115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 19:25:19.577426  742115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 19:25:19.577517  742115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 19:25:19.592632  742115 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 19:25:19.592660  742115 start.go:495] detecting cgroup driver to use...
	I0729 19:25:19.592770  742115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 19:25:19.609126  742115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 19:25:19.622492  742115 docker.go:216] disabling cri-docker service (if available) ...
	I0729 19:25:19.622579  742115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 19:25:19.635519  742115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 19:25:19.648563  742115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 19:25:19.767791  742115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 19:25:19.919435  742115 docker.go:232] disabling docker service ...
	I0729 19:25:19.919545  742115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 19:25:19.933700  742115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 19:25:19.946390  742115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 19:25:20.084261  742115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 19:25:20.211366  742115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 19:25:20.225800  742115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 19:25:20.243431  742115 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 19:25:20.243507  742115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:25:20.253796  742115 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 19:25:20.253875  742115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:25:20.263830  742115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:25:20.274676  742115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:25:20.284824  742115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 19:25:20.294831  742115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:25:20.304447  742115 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 19:25:20.320689  742115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
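The sed invocations between 19:25:20.243431 and 19:25:20.320689 rewrite /etc/crio/crio.conf.d/02-crio.conf in place, and the crictl endpoint is written to /etc/crictl.yaml just before them; the log never echoes the resulting files. Reconstructing from those commands alone, the drop-in should end up roughly as sketched below (illustrative, not the verbatim files), and it can be spot-checked on the node with a plain grep:

    # expected shape of /etc/crio/crio.conf.d/02-crio.conf after the edits above (sketch)
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
    # expected /etc/crictl.yaml (written at 19:25:20.225800)
    #   runtime-endpoint: unix:///var/run/crio/crio.sock
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf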
	I0729 19:25:20.330574  742115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 19:25:20.341125  742115 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 19:25:20.341198  742115 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 19:25:20.356080  742115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 19:25:20.368129  742115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:25:20.501046  742115 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 19:25:20.635687  742115 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 19:25:20.635811  742115 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 19:25:20.640151  742115 start.go:563] Will wait 60s for crictl version
	I0729 19:25:20.640223  742115 ssh_runner.go:195] Run: which crictl
	I0729 19:25:20.643462  742115 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 19:25:20.681068  742115 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 19:25:20.681170  742115 ssh_runner.go:195] Run: crio --version
	I0729 19:25:20.706923  742115 ssh_runner.go:195] Run: crio --version
	I0729 19:25:20.735201  742115 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 19:25:20.736318  742115 main.go:141] libmachine: (addons-416933) Calling .GetIP
	I0729 19:25:20.739013  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:20.739498  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:20.739530  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:20.739761  742115 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 19:25:20.743458  742115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 19:25:20.755444  742115 kubeadm.go:883] updating cluster {Name:addons-416933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-416933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 19:25:20.755592  742115 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:25:20.755661  742115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:25:20.788675  742115 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 19:25:20.788754  742115 ssh_runner.go:195] Run: which lz4
	I0729 19:25:20.792506  742115 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 19:25:20.796408  742115 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 19:25:20.796440  742115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 19:25:22.031201  742115 crio.go:462] duration metric: took 1.238727259s to copy over tarball
	I0729 19:25:22.031300  742115 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 19:25:24.235495  742115 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.204154319s)
	I0729 19:25:24.235540  742115 crio.go:469] duration metric: took 2.204302575s to extract the tarball
	I0729 19:25:24.235552  742115 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 19:25:24.272245  742115 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 19:25:24.310886  742115 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 19:25:24.310917  742115 cache_images.go:84] Images are preloaded, skipping loading
	I0729 19:25:24.310929  742115 kubeadm.go:934] updating node { 192.168.39.249 8443 v1.30.3 crio true true} ...
	I0729 19:25:24.311092  742115 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-416933 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-416933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
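The kubelet unit fragment above is rendered into the systemd drop-in that minikube copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a moment later (the 313-byte scp at 19:25:24.372975). To inspect the effective unit on the node, something like this would do (illustrative; not part of the test run):

    systemctl cat kubelet    # kubelet.service plus the 10-kubeadm.conf drop-in shown above
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf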
	I0729 19:25:24.311186  742115 ssh_runner.go:195] Run: crio config
	I0729 19:25:24.354072  742115 cni.go:84] Creating CNI manager for ""
	I0729 19:25:24.354094  742115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:25:24.354104  742115 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 19:25:24.354126  742115 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.249 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-416933 NodeName:addons-416933 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 19:25:24.354261  742115 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-416933"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.249
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.249"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 19:25:24.354324  742115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 19:25:24.363637  742115 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 19:25:24.363716  742115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 19:25:24.372975  742115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 19:25:24.387994  742115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 19:25:24.402537  742115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
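This kubeadm.yaml.new is the 2157-byte rendering of the kubeadm config printed at 19:25:24.354261; it is promoted to /var/tmp/minikube/kubeadm.yaml at 19:25:25.964976 before kubeadm init consumes it. If the generated config itself were ever suspected, it could be exercised without mutating the node via kubeadm's dry-run mode (a sketch, not something this test does):

    sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run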
	I0729 19:25:24.417282  742115 ssh_runner.go:195] Run: grep 192.168.39.249	control-plane.minikube.internal$ /etc/hosts
	I0729 19:25:24.420689  742115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.249	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
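Together with the earlier edit at 19:25:20.743458, the hosts-file rewrites leave /etc/hosts on the guest with two minikube-specific entries (shown here for reference; the full file is not printed in the log):

    192.168.39.1	host.minikube.internal
    192.168.39.249	control-plane.minikube.internal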
	I0729 19:25:24.431651  742115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:25:24.555170  742115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:25:24.571455  742115 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933 for IP: 192.168.39.249
	I0729 19:25:24.571487  742115 certs.go:194] generating shared ca certs ...
	I0729 19:25:24.571511  742115 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:24.571727  742115 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 19:25:25.057750  742115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt ...
	I0729 19:25:25.057786  742115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt: {Name:mka25c0be5555f80bf8febb258794b80d348a452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:25.057962  742115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key ...
	I0729 19:25:25.057972  742115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key: {Name:mk86951ac71788cad6689ac2a1b5a668bd8f7600 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:25.058041  742115 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 19:25:25.295601  742115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt ...
	I0729 19:25:25.295630  742115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt: {Name:mk348e6d0621040e25d95237e14ec47b49e2205c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:25.295789  742115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key ...
	I0729 19:25:25.295799  742115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key: {Name:mk51dc3102da76b732b4761ef818eaf545aff1bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:25.295871  742115 certs.go:256] generating profile certs ...
	I0729 19:25:25.295932  742115 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/client.key
	I0729 19:25:25.295946  742115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/client.crt with IP's: []
	I0729 19:25:25.444506  742115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/client.crt ...
	I0729 19:25:25.444541  742115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/client.crt: {Name:mk69741a4462aa649cab328687779608912dd1cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:25.444721  742115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/client.key ...
	I0729 19:25:25.444732  742115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/client.key: {Name:mk3a1251fd47003c80ba09b35c46c8ec661d7cc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:25.444803  742115 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/apiserver.key.062a4770
	I0729 19:25:25.444821  742115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/apiserver.crt.062a4770 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.249]
	I0729 19:25:25.494087  742115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/apiserver.crt.062a4770 ...
	I0729 19:25:25.494118  742115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/apiserver.crt.062a4770: {Name:mke33a84229e9835c4fae6fbaf3fe38aee8dbf2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:25.494297  742115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/apiserver.key.062a4770 ...
	I0729 19:25:25.494311  742115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/apiserver.key.062a4770: {Name:mka9ed80255300ffc9fc9be2533fa9e5e2bc49d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:25.494385  742115 certs.go:381] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/apiserver.crt.062a4770 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/apiserver.crt
	I0729 19:25:25.494457  742115 certs.go:385] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/apiserver.key.062a4770 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/apiserver.key
	I0729 19:25:25.494506  742115 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/proxy-client.key
	I0729 19:25:25.494523  742115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/proxy-client.crt with IP's: []
	I0729 19:25:25.659135  742115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/proxy-client.crt ...
	I0729 19:25:25.659168  742115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/proxy-client.crt: {Name:mk087fd2a5043d1209ac0929e19b1c9293fc6b91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:25.659339  742115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/proxy-client.key ...
	I0729 19:25:25.659355  742115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/proxy-client.key: {Name:mk710d2bbd9f300c556ea12b4c6bfd4b6dc8c64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:25.659523  742115 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 19:25:25.659562  742115 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 19:25:25.659589  742115 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 19:25:25.659612  742115 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 19:25:25.660214  742115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 19:25:25.687310  742115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 19:25:25.709213  742115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 19:25:25.732125  742115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 19:25:25.754139  742115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 19:25:25.775828  742115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 19:25:25.796773  742115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 19:25:25.818874  742115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/addons-416933/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 19:25:25.840399  742115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 19:25:25.862025  742115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 19:25:25.877187  742115 ssh_runner.go:195] Run: openssl version
	I0729 19:25:25.882608  742115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 19:25:25.892265  742115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:25:25.896686  742115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:25:25.896746  742115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 19:25:25.902224  742115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
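The openssl and ln steps between 19:25:25.877187 and here follow OpenSSL's hashed-directory convention: the subject hash of minikubeCA.pem becomes the <hash>.0 symlink name under /etc/ssl/certs (b5213941.0 above), which is how TLS clients locate the CA at verification time. Reproducing it by hand (illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected to print b5213941, the link name chosen above
    ls -l /etc/ssl/certs/b5213941.0   # symlink to /etc/ssl/certs/minikubeCA.pem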
	I0729 19:25:25.912128  742115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 19:25:25.916004  742115 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 19:25:25.916086  742115 kubeadm.go:392] StartCluster: {Name:addons-416933 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-416933 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:25:25.916173  742115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 19:25:25.916219  742115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 19:25:25.954991  742115 cri.go:89] found id: ""
	I0729 19:25:25.955083  742115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 19:25:25.964976  742115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 19:25:25.974352  742115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 19:25:25.983633  742115 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 19:25:25.983660  742115 kubeadm.go:157] found existing configuration files:
	
	I0729 19:25:25.983706  742115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 19:25:25.991933  742115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 19:25:25.991990  742115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 19:25:26.000661  742115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 19:25:26.008949  742115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 19:25:26.009002  742115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 19:25:26.017541  742115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 19:25:26.025908  742115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 19:25:26.025985  742115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 19:25:26.037225  742115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 19:25:26.052495  742115 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 19:25:26.052567  742115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 19:25:26.062200  742115 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 19:25:26.248699  742115 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 19:25:36.052397  742115 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 19:25:36.052483  742115 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 19:25:36.052584  742115 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 19:25:36.052721  742115 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 19:25:36.052826  742115 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 19:25:36.052880  742115 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 19:25:36.054429  742115 out.go:204]   - Generating certificates and keys ...
	I0729 19:25:36.054536  742115 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 19:25:36.054623  742115 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 19:25:36.054705  742115 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 19:25:36.054781  742115 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 19:25:36.054859  742115 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 19:25:36.054930  742115 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 19:25:36.055006  742115 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 19:25:36.055185  742115 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-416933 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I0729 19:25:36.055239  742115 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 19:25:36.055385  742115 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-416933 localhost] and IPs [192.168.39.249 127.0.0.1 ::1]
	I0729 19:25:36.055477  742115 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 19:25:36.055567  742115 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 19:25:36.055638  742115 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 19:25:36.055718  742115 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 19:25:36.055762  742115 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 19:25:36.055809  742115 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 19:25:36.055867  742115 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 19:25:36.055955  742115 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 19:25:36.056057  742115 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 19:25:36.056170  742115 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 19:25:36.056267  742115 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 19:25:36.057685  742115 out.go:204]   - Booting up control plane ...
	I0729 19:25:36.057772  742115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 19:25:36.057861  742115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 19:25:36.057960  742115 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 19:25:36.058087  742115 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 19:25:36.058179  742115 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 19:25:36.058212  742115 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 19:25:36.058315  742115 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 19:25:36.058381  742115 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 19:25:36.058443  742115 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.011108ms
	I0729 19:25:36.058526  742115 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 19:25:36.058589  742115 kubeadm.go:310] [api-check] The API server is healthy after 5.002020794s
	I0729 19:25:36.058675  742115 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 19:25:36.058781  742115 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 19:25:36.058855  742115 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 19:25:36.059016  742115 kubeadm.go:310] [mark-control-plane] Marking the node addons-416933 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 19:25:36.059073  742115 kubeadm.go:310] [bootstrap-token] Using token: 3377ua.se4nqahll51l3yjr
	I0729 19:25:36.060540  742115 out.go:204]   - Configuring RBAC rules ...
	I0729 19:25:36.060627  742115 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 19:25:36.060712  742115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 19:25:36.060864  742115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 19:25:36.061030  742115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 19:25:36.061173  742115 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 19:25:36.061247  742115 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 19:25:36.061351  742115 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 19:25:36.061388  742115 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 19:25:36.061427  742115 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 19:25:36.061433  742115 kubeadm.go:310] 
	I0729 19:25:36.061484  742115 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 19:25:36.061490  742115 kubeadm.go:310] 
	I0729 19:25:36.061566  742115 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 19:25:36.061574  742115 kubeadm.go:310] 
	I0729 19:25:36.061603  742115 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 19:25:36.061685  742115 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 19:25:36.061748  742115 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 19:25:36.061759  742115 kubeadm.go:310] 
	I0729 19:25:36.061833  742115 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 19:25:36.061848  742115 kubeadm.go:310] 
	I0729 19:25:36.061917  742115 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 19:25:36.061926  742115 kubeadm.go:310] 
	I0729 19:25:36.061982  742115 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 19:25:36.062055  742115 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 19:25:36.062112  742115 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 19:25:36.062118  742115 kubeadm.go:310] 
	I0729 19:25:36.062194  742115 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 19:25:36.062265  742115 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 19:25:36.062280  742115 kubeadm.go:310] 
	I0729 19:25:36.062354  742115 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3377ua.se4nqahll51l3yjr \
	I0729 19:25:36.062440  742115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6ca3a9d55ee61a543466ff10da1967c1b50ddc5ed0f369803448ea7dd15a35e4 \
	I0729 19:25:36.062477  742115 kubeadm.go:310] 	--control-plane 
	I0729 19:25:36.062486  742115 kubeadm.go:310] 
	I0729 19:25:36.062566  742115 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 19:25:36.062575  742115 kubeadm.go:310] 
	I0729 19:25:36.062657  742115 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3377ua.se4nqahll51l3yjr \
	I0729 19:25:36.062734  742115 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6ca3a9d55ee61a543466ff10da1967c1b50ddc5ed0f369803448ea7dd15a35e4 
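The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key. Since minikube places that CA at /var/lib/minikube/certs/ca.crt (scp'd at 19:25:25.660214), the hash could be recomputed with the standard recipe from the kubeadm documentation (shown as a sketch; the test never runs this):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'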
	I0729 19:25:36.062759  742115 cni.go:84] Creating CNI manager for ""
	I0729 19:25:36.062768  742115 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:25:36.064169  742115 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 19:25:36.065441  742115 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 19:25:36.076080  742115 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
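The 496-byte file copied here to /etc/cni/net.d/1-k8s.conflist is the bridge CNI config recommended at cni.go:146; its contents are not echoed in the log. For orientation, a minimal bridge conflist for the 10.244.0.0/16 pod CIDR configured earlier would have roughly the shape shown in the comments below (illustrative only, not the literal file):

    cat /etc/cni/net.d/1-k8s.conflist
    # roughly:
    # {
    #   "cniVersion": "0.3.1",
    #   "name": "bridge",
    #   "plugins": [
    #     { "type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
    #       "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
    #     { "type": "portmap", "capabilities": { "portMappings": true } }
    #   ]
    # }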
	I0729 19:25:36.093227  742115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 19:25:36.093331  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:36.093341  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-416933 minikube.k8s.io/updated_at=2024_07_29T19_25_36_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a minikube.k8s.io/name=addons-416933 minikube.k8s.io/primary=true
	I0729 19:25:36.237980  742115 ops.go:34] apiserver oom_adj: -16
	I0729 19:25:36.238176  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:36.739143  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:37.238776  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:37.738998  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:38.238439  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:38.738212  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:39.238649  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:39.738816  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:40.239250  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:40.739150  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:41.238834  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:41.738501  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:42.238283  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:42.739142  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:43.238662  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:43.738633  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:44.239269  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:44.738453  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:45.238550  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:45.738552  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:46.239264  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:46.738848  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:47.238829  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:47.738838  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:48.238396  742115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 19:25:48.317168  742115 kubeadm.go:1113] duration metric: took 12.223911653s to wait for elevateKubeSystemPrivileges
	I0729 19:25:48.317202  742115 kubeadm.go:394] duration metric: took 22.401124608s to StartCluster
	I0729 19:25:48.317223  742115 settings.go:142] acquiring lock: {Name:mk9a2eb797f60b19768f4bfa250a8d2214a5ca12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:48.317377  742115 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 19:25:48.318019  742115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/kubeconfig: {Name:mk9e65e9af9b71b889324d8c5e2a1adfebbca588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:25:48.318277  742115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 19:25:48.318387  742115 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.249 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 19:25:48.318466  742115 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0729 19:25:48.318583  742115 addons.go:69] Setting gcp-auth=true in profile "addons-416933"
	I0729 19:25:48.318609  742115 addons.go:69] Setting volumesnapshots=true in profile "addons-416933"
	I0729 19:25:48.318610  742115 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-416933"
	I0729 19:25:48.318614  742115 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-416933"
	I0729 19:25:48.318632  742115 mustload.go:65] Loading cluster: addons-416933
	I0729 19:25:48.318639  742115 addons.go:234] Setting addon volumesnapshots=true in "addons-416933"
	I0729 19:25:48.318630  742115 addons.go:69] Setting volcano=true in profile "addons-416933"
	I0729 19:25:48.318651  742115 addons.go:69] Setting inspektor-gadget=true in profile "addons-416933"
	I0729 19:25:48.318660  742115 addons.go:69] Setting storage-provisioner=true in profile "addons-416933"
	I0729 19:25:48.318669  742115 addons.go:234] Setting addon volcano=true in "addons-416933"
	I0729 19:25:48.318676  742115 addons.go:234] Setting addon storage-provisioner=true in "addons-416933"
	I0729 19:25:48.318677  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.318684  742115 addons.go:234] Setting addon inspektor-gadget=true in "addons-416933"
	I0729 19:25:48.318659  742115 addons.go:69] Setting ingress-dns=true in profile "addons-416933"
	I0729 19:25:48.318694  742115 addons.go:69] Setting helm-tiller=true in profile "addons-416933"
	I0729 19:25:48.318707  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.318712  742115 addons.go:234] Setting addon ingress-dns=true in "addons-416933"
	I0729 19:25:48.318717  742115 addons.go:234] Setting addon helm-tiller=true in "addons-416933"
	I0729 19:25:48.318720  742115 addons.go:69] Setting metrics-server=true in profile "addons-416933"
	I0729 19:25:48.318726  742115 addons.go:69] Setting cloud-spanner=true in profile "addons-416933"
	I0729 19:25:48.318743  742115 addons.go:234] Setting addon metrics-server=true in "addons-416933"
	I0729 19:25:48.318744  742115 addons.go:234] Setting addon cloud-spanner=true in "addons-416933"
	I0729 19:25:48.318755  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.318759  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.318762  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.318766  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.318854  742115 config.go:182] Loaded profile config "addons-416933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:25:48.318957  742115 addons.go:69] Setting default-storageclass=true in profile "addons-416933"
	I0729 19:25:48.318989  742115 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-416933"
	I0729 19:25:48.318593  742115 addons.go:69] Setting yakd=true in profile "addons-416933"
	I0729 19:25:48.318707  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.319178  742115 addons.go:234] Setting addon yakd=true in "addons-416933"
	I0729 19:25:48.319189  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.318652  742115 addons.go:69] Setting registry=true in profile "addons-416933"
	I0729 19:25:48.318712  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.319204  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.319212  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.319217  742115 addons.go:234] Setting addon registry=true in "addons-416933"
	I0729 19:25:48.319244  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.319247  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.319333  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.319354  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.319543  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.319588  742115 addons.go:69] Setting ingress=true in profile "addons-416933"
	I0729 19:25:48.319608  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.319619  742115 addons.go:234] Setting addon ingress=true in "addons-416933"
	I0729 19:25:48.319647  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.318717  742115 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-416933"
	I0729 19:25:48.319701  742115 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-416933"
	I0729 19:25:48.319728  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.319770  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.319787  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.319809  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.319809  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.319189  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.318590  742115 config.go:182] Loaded profile config "addons-416933": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 19:25:48.319967  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.319972  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.319198  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.320008  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.320083  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.320104  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.320166  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.320235  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.319543  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.320481  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.319573  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.320575  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.318641  742115 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-416933"
	I0729 19:25:48.321638  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.321698  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.318642  742115 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-416933"
	I0729 19:25:48.322065  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.322441  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.322486  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.319189  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.323137  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.327365  742115 out.go:177] * Verifying Kubernetes components...
	I0729 19:25:48.328897  742115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 19:25:48.340792  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41887
	I0729 19:25:48.340958  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36655
	I0729 19:25:48.341041  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0729 19:25:48.341450  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.341618  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35859
	I0729 19:25:48.341839  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.341980  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.342205  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.342222  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.342393  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.342409  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.342568  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.342583  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.342668  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44113
	I0729 19:25:48.342857  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.342915  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.343133  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.343614  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.343653  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.356170  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33131
	I0729 19:25:48.356184  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.356254  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.356307  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.356320  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.356347  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44635
	I0729 19:25:48.356354  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.356463  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41459
	I0729 19:25:48.356673  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.356717  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.359475  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.359810  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.359828  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.359890  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.359904  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.359916  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.359964  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.360471  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.360501  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.375568  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.375858  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.375888  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.375888  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.375858  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.376581  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.376629  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.377005  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.377125  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.377143  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.377655  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.377686  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.377805  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.377986  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.378384  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.378470  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34849
	I0729 19:25:48.378785  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.378800  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.378905  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.378997  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38959
	I0729 19:25:48.379337  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.393966  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38753
	I0729 19:25:48.394183  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36199
	I0729 19:25:48.394336  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.395003  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.395033  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.395121  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32791
	I0729 19:25:48.395458  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.395490  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.396106  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.396151  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.396244  742115 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-416933"
	I0729 19:25:48.396319  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.396637  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.396857  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.396886  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.397145  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.397210  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.397386  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.397429  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.397612  742115 addons.go:234] Setting addon default-storageclass=true in "addons-416933"
	I0729 19:25:48.397645  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:48.397830  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.397863  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.398132  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.398154  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.398204  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.398215  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.398250  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.398317  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.398544  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38557
	I0729 19:25:48.398777  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.399015  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.399058  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.399370  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.399480  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.399500  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.399582  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39777
	I0729 19:25:48.399853  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.399872  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.407777  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.407794  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32959
	I0729 19:25:48.407853  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.407867  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.407899  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.407938  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.408268  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.408374  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.408390  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.408714  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.408755  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.408840  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.408912  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.409607  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.409683  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.409700  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.409702  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.409756  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.410330  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.410399  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.410602  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.411969  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37689
	I0729 19:25:48.413344  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.413679  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.414359  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.414379  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.414815  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.415120  742115 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0729 19:25:48.415652  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.415681  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.416799  742115 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0729 19:25:48.416818  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0729 19:25:48.416838  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:48.418248  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33733
	I0729 19:25:48.418277  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39625
	I0729 19:25:48.418804  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.418848  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.419322  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.419341  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.419544  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.419568  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.419857  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.419907  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.420336  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.420395  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:48.420410  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.420640  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.420659  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:48.420857  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:48.421065  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:48.421234  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:48.421595  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.422899  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.423378  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.423677  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:48.423691  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:48.425695  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:48.425707  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0729 19:25:48.425712  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:48.425725  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:48.425733  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:48.425836  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36575
	I0729 19:25:48.426048  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.426294  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.426630  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.426648  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.426828  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.426855  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.426962  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.427111  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.427167  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.427565  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.428664  742115 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 19:25:48.428767  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.429165  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:48.429193  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:48.429512  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	W0729 19:25:48.429599  742115 out.go:239] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0729 19:25:48.429936  742115 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:25:48.429959  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 19:25:48.429978  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:48.429978  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.430494  742115 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 19:25:48.431369  742115 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0729 19:25:48.431467  742115 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 19:25:48.431480  742115 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 19:25:48.431501  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:48.432577  742115 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 19:25:48.432604  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 19:25:48.432624  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:48.433195  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.433890  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:48.433922  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.434087  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:48.434248  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:48.434391  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:48.434545  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:48.435395  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.435829  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:48.435856  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.436177  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:48.436399  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:48.436472  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.436530  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:48.436652  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:48.436862  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.436894  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.437422  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:48.437449  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.437615  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:48.437782  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:48.437962  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:48.438123  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:48.440165  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37163
	I0729 19:25:48.440677  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.441176  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.441200  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.441559  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.441856  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.443392  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41037
	I0729 19:25:48.443670  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.444133  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.444794  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.444814  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.445549  742115 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 19:25:48.446427  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39681
	I0729 19:25:48.447093  742115 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 19:25:48.447112  742115 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 19:25:48.447135  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:48.447205  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.447439  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.447670  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43497
	I0729 19:25:48.448163  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.448329  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.448861  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.448878  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.449254  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.449477  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.450605  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.450622  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.451104  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.451320  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.451459  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.451505  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.451986  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:48.452007  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.452278  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:48.452450  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:48.452562  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:48.452662  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:48.453119  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.453131  742115 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 19:25:48.454158  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.454246  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39957
	I0729 19:25:48.454907  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.455294  742115 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 19:25:48.455311  742115 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 19:25:48.455320  742115 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.1
	I0729 19:25:48.455419  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.455441  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.455861  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.456454  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.456501  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.456567  742115 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 19:25:48.456588  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 19:25:48.456607  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:48.457651  742115 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 19:25:48.457674  742115 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 19:25:48.457690  742115 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 19:25:48.457708  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:48.459878  742115 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 19:25:48.460465  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.460939  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:48.460973  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.461218  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:48.461424  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:48.461478  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.461646  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:48.461827  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:48.462078  742115 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 19:25:48.462132  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:48.462150  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.462279  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:48.462423  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:48.462552  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:48.462677  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:48.464191  742115 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 19:25:48.465085  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I0729 19:25:48.465131  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40947
	I0729 19:25:48.465630  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.465658  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.466069  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.466089  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.466326  742115 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 19:25:48.466352  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.466370  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.466439  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.466698  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.466865  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.466912  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.468417  742115 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 19:25:48.469105  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.469175  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.469440  742115 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 19:25:48.469460  742115 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 19:25:48.469488  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:48.470477  742115 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 19:25:48.470523  742115 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 19:25:48.470712  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33417
	I0729 19:25:48.471748  742115 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 19:25:48.471769  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 19:25:48.471786  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:48.471885  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.472271  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.472501  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.472521  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.472819  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:48.472842  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.473014  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:48.473159  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.473188  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:48.473340  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.473357  742115 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 19:25:48.473390  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:48.473570  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34483
	I0729 19:25:48.473566  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:48.474491  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.475046  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.475063  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.475328  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.475821  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:48.475848  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.475978  742115 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 19:25:48.475992  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:48.476207  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:48.476365  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:48.476494  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:48.476776  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.477250  742115 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 19:25:48.477276  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 19:25:48.477293  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:48.477316  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:48.477353  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:48.478119  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I0729 19:25:48.478555  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.479081  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.479105  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.479457  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.479751  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.480290  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34201
	I0729 19:25:48.480833  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.481053  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.481503  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:48.481523  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.481699  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.481838  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.481848  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.481897  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:48.482304  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:48.482332  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.482485  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:48.482548  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.482678  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:48.483523  742115 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 19:25:48.484326  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.484574  742115 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 19:25:48.484592  742115 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 19:25:48.484610  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:48.484943  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0729 19:25:48.485388  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.485929  742115 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 19:25:48.486137  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.486206  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.486912  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.487192  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.487501  742115 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 19:25:48.487517  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 19:25:48.487534  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:48.488115  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.488480  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:48.488515  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.488753  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:48.489080  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:48.489404  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:48.489915  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.489964  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:48.491281  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.491676  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:48.491699  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.491790  742115 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 19:25:48.491911  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:48.492041  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:48.492220  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:48.492392  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:48.493367  742115 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 19:25:48.493384  742115 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 19:25:48.493398  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:48.496245  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.496608  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:48.496639  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.496769  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:48.496885  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:48.496972  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:48.497083  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:48.497997  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46231
	I0729 19:25:48.498310  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:48.498764  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:48.498777  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:48.499031  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:48.499178  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:48.500475  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:48.502149  742115 out.go:177]   - Using image docker.io/busybox:stable
	I0729 19:25:48.503366  742115 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 19:25:48.504559  742115 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 19:25:48.504575  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 19:25:48.504588  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:48.507038  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.507392  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:48.507420  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:48.507544  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:48.507700  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:48.507837  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:48.507977  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:48.778402  742115 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0729 19:25:48.778433  742115 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0729 19:25:48.813232  742115 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 19:25:48.813254  742115 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 19:25:48.838345  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 19:25:48.852097  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 19:25:48.866350  742115 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 19:25:48.866380  742115 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 19:25:48.867995  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 19:25:48.952961  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 19:25:48.969884  742115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 19:25:48.969915  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 19:25:48.970300  742115 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 19:25:48.970326  742115 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 19:25:49.026614  742115 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 19:25:49.026639  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 19:25:49.041899  742115 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 19:25:49.041930  742115 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0729 19:25:49.043418  742115 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 19:25:49.043441  742115 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 19:25:49.045827  742115 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 19:25:49.045847  742115 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 19:25:49.049147  742115 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 19:25:49.049166  742115 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 19:25:49.059453  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 19:25:49.067439  742115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 19:25:49.067606  742115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 19:25:49.086975  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 19:25:49.108150  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 19:25:49.122570  742115 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 19:25:49.122604  742115 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 19:25:49.164402  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 19:25:49.172505  742115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 19:25:49.172549  742115 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 19:25:49.180821  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0729 19:25:49.201412  742115 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 19:25:49.201439  742115 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 19:25:49.214061  742115 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 19:25:49.214098  742115 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 19:25:49.254185  742115 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 19:25:49.254221  742115 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 19:25:49.284977  742115 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 19:25:49.285000  742115 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 19:25:49.366563  742115 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:25:49.366595  742115 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 19:25:49.370199  742115 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 19:25:49.370228  742115 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 19:25:49.407874  742115 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 19:25:49.407908  742115 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 19:25:49.447287  742115 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 19:25:49.447333  742115 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 19:25:49.490131  742115 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 19:25:49.490164  742115 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 19:25:49.511464  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 19:25:49.513485  742115 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 19:25:49.513508  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 19:25:49.571326  742115 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 19:25:49.571355  742115 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 19:25:49.599264  742115 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 19:25:49.599298  742115 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 19:25:49.661564  742115 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 19:25:49.661592  742115 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 19:25:49.674545  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 19:25:49.758809  742115 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 19:25:49.758844  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 19:25:49.770442  742115 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 19:25:49.770470  742115 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 19:25:49.849888  742115 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 19:25:49.849922  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 19:25:49.929258  742115 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 19:25:49.929286  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 19:25:49.943506  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 19:25:50.035438  742115 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 19:25:50.035482  742115 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 19:25:50.090065  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 19:25:50.160541  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.32215538s)
	I0729 19:25:50.160622  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:50.160632  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:50.160968  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:50.161033  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:50.161047  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:50.161060  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:50.161073  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:50.161321  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:50.161366  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:50.175308  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:50.175336  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:50.175685  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:50.175708  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:50.175720  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:50.356410  742115 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 19:25:50.356440  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 19:25:50.581164  742115 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 19:25:50.581197  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 19:25:50.764170  742115 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 19:25:50.764211  742115 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 19:25:51.119081  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 19:25:53.034646  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.182501233s)
	I0729 19:25:53.034665  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.166646694s)
	I0729 19:25:53.034704  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:53.034716  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:53.034704  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:53.034782  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:53.035036  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:53.035075  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:53.035079  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:53.035100  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:53.035113  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:53.035115  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:53.035124  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:53.035128  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:53.035221  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:53.035235  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:53.035447  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:53.035453  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:53.035466  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:53.035471  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:55.483672  742115 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 19:25:55.483723  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:55.487288  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:55.487724  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:55.487746  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:55.488004  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:55.488251  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:55.488455  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:55.488620  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:56.091947  742115 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 19:25:56.207441  742115 addons.go:234] Setting addon gcp-auth=true in "addons-416933"
	I0729 19:25:56.207513  742115 host.go:66] Checking if "addons-416933" exists ...
	I0729 19:25:56.207850  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:56.207877  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:56.224105  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
	I0729 19:25:56.224720  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:56.225225  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:56.225244  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:56.225690  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:56.226268  742115 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 19:25:56.226317  742115 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 19:25:56.242452  742115 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39529
	I0729 19:25:56.242923  742115 main.go:141] libmachine: () Calling .GetVersion
	I0729 19:25:56.243456  742115 main.go:141] libmachine: Using API Version  1
	I0729 19:25:56.243481  742115 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 19:25:56.243884  742115 main.go:141] libmachine: () Calling .GetMachineName
	I0729 19:25:56.244091  742115 main.go:141] libmachine: (addons-416933) Calling .GetState
	I0729 19:25:56.245856  742115 main.go:141] libmachine: (addons-416933) Calling .DriverName
	I0729 19:25:56.246134  742115 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 19:25:56.246157  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHHostname
	I0729 19:25:56.249350  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:56.249808  742115 main.go:141] libmachine: (addons-416933) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:df:c7", ip: ""} in network mk-addons-416933: {Iface:virbr1 ExpiryTime:2024-07-29 20:25:11 +0000 UTC Type:0 Mac:52:54:00:dd:df:c7 Iaid: IPaddr:192.168.39.249 Prefix:24 Hostname:addons-416933 Clientid:01:52:54:00:dd:df:c7}
	I0729 19:25:56.249835  742115 main.go:141] libmachine: (addons-416933) DBG | domain addons-416933 has defined IP address 192.168.39.249 and MAC address 52:54:00:dd:df:c7 in network mk-addons-416933
	I0729 19:25:56.250019  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHPort
	I0729 19:25:56.250195  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHKeyPath
	I0729 19:25:56.250415  742115 main.go:141] libmachine: (addons-416933) Calling .GetSSHUsername
	I0729 19:25:56.250623  742115 sshutil.go:53] new ssh client: &{IP:192.168.39.249 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/addons-416933/id_rsa Username:docker}
	I0729 19:25:56.505039  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.552026833s)
	I0729 19:25:56.505102  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.505104  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.445618418s)
	I0729 19:25:56.505167  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.505190  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.505199  742115 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (7.437568936s)
	I0729 19:25:56.505116  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.505256  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.418255287s)
	I0729 19:25:56.505283  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.505176  742115 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.437710429s)
	I0729 19:25:56.505325  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.340896391s)
	I0729 19:25:56.505345  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.505349  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.505355  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.505223  742115 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0729 19:25:56.505450  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.324573465s)
	I0729 19:25:56.505484  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.993988151s)
	I0729 19:25:56.505505  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.505295  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.397115815s)
	I0729 19:25:56.505514  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.505535  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.505549  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.505589  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.831018087s)
	I0729 19:25:56.505604  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.505615  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.505772  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.562228636s)
	W0729 19:25:56.505808  742115 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 19:25:56.505838  742115 retry.go:31] will retry after 308.846466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 19:25:56.505919  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.41581815s)
	I0729 19:25:56.505941  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.505951  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.506104  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.506125  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.506439  742115 node_ready.go:35] waiting up to 6m0s for node "addons-416933" to be "Ready" ...
	I0729 19:25:56.508646  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:56.508656  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:56.508682  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.508690  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.508700  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.508701  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.508709  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.508717  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.508718  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.508726  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.508735  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.508744  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.508751  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.508758  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.508683  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:56.508700  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:56.508787  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.508796  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.508805  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.508814  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.508818  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:56.508865  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.508872  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.508878  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.508887  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.508889  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:56.508896  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.508901  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:56.508904  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.508909  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.508918  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.508926  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.508933  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.508954  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:56.508992  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.509004  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.509017  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.509031  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.509083  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:56.509107  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.509114  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.509123  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.509130  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.508880  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.509171  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.509261  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:56.509294  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.509302  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.509405  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.509416  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.509425  742115 addons.go:475] Verifying addon metrics-server=true in "addons-416933"
	I0729 19:25:56.509776  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:56.509822  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.509831  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.509847  742115 addons.go:475] Verifying addon ingress=true in "addons-416933"
	I0729 19:25:56.510063  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:56.510101  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.510111  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.510128  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.510145  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.510185  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.510198  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.510233  742115 addons.go:475] Verifying addon registry=true in "addons-416933"
	I0729 19:25:56.510262  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.510270  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.510637  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:56.510678  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.510695  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.510947  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.510964  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.510970  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:56.511589  742115 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-416933 service yakd-dashboard -n yakd-dashboard
	
	I0729 19:25:56.511596  742115 out.go:177] * Verifying ingress addon...
	I0729 19:25:56.513457  742115 out.go:177] * Verifying registry addon...
	I0729 19:25:56.514406  742115 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 19:25:56.514670  742115 node_ready.go:49] node "addons-416933" has status "Ready":"True"
	I0729 19:25:56.514695  742115 node_ready.go:38] duration metric: took 8.233006ms for node "addons-416933" to be "Ready" ...
	I0729 19:25:56.514707  742115 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:25:56.515646  742115 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 19:25:56.553186  742115 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 19:25:56.553209  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:25:56.556504  742115 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 19:25:56.556529  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:25:56.562874  742115 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-k67fb" in "kube-system" namespace to be "Ready" ...
	I0729 19:25:56.569267  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:56.569290  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:56.569583  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:56.569608  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:56.815454  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 19:25:57.009947  742115 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-416933" context rescaled to 1 replicas
	I0729 19:25:57.022500  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:25:57.023887  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:25:57.526346  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:25:57.529777  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:25:57.806681  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.687534713s)
	I0729 19:25:57.806750  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:57.806765  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:57.806687  742115 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.560524888s)
	I0729 19:25:57.807066  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:57.807084  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:57.807096  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:57.807104  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:57.807108  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:57.807365  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:57.807380  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:57.807393  742115 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-416933"
	I0729 19:25:57.808816  742115 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 19:25:57.808829  742115 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 19:25:57.810223  742115 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 19:25:57.811166  742115 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 19:25:57.811381  742115 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 19:25:57.811401  742115 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 19:25:57.836613  742115 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 19:25:57.836643  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:25:57.912908  742115 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 19:25:57.912942  742115 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 19:25:58.019431  742115 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 19:25:58.019456  742115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 19:25:58.028517  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:25:58.028784  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:25:58.079210  742115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 19:25:58.317811  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:25:58.519190  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:25:58.521382  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:25:58.570078  742115 pod_ready.go:102] pod "coredns-7db6d8ff4d-k67fb" in "kube-system" namespace has status "Ready":"False"
	I0729 19:25:58.621761  742115 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.806243344s)
	I0729 19:25:58.621838  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:58.621857  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:58.622234  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:58.622312  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:58.622330  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:58.622345  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:58.622356  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:58.622632  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:58.622667  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:58.622685  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:58.834192  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:25:58.980866  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:58.980908  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:58.981254  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:58.981279  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:58.981312  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:58.981331  742115 main.go:141] libmachine: Making call to close driver server
	I0729 19:25:58.981345  742115 main.go:141] libmachine: (addons-416933) Calling .Close
	I0729 19:25:58.981602  742115 main.go:141] libmachine: Successfully made call to close driver server
	I0729 19:25:58.981663  742115 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 19:25:58.981728  742115 main.go:141] libmachine: (addons-416933) DBG | Closing plugin on server side
	I0729 19:25:58.983699  742115 addons.go:475] Verifying addon gcp-auth=true in "addons-416933"
	I0729 19:25:58.985229  742115 out.go:177] * Verifying gcp-auth addon...
	I0729 19:25:58.987563  742115 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 19:25:59.021330  742115 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 19:25:59.021358  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:25:59.032404  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:25:59.035207  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:25:59.317207  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:25:59.492299  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:25:59.519346  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:25:59.520905  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:25:59.569202  742115 pod_ready.go:92] pod "coredns-7db6d8ff4d-k67fb" in "kube-system" namespace has status "Ready":"True"
	I0729 19:25:59.569227  742115 pod_ready.go:81] duration metric: took 3.006328401s for pod "coredns-7db6d8ff4d-k67fb" in "kube-system" namespace to be "Ready" ...
	I0729 19:25:59.569239  742115 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vsjwb" in "kube-system" namespace to be "Ready" ...
	I0729 19:25:59.580088  742115 pod_ready.go:92] pod "coredns-7db6d8ff4d-vsjwb" in "kube-system" namespace has status "Ready":"True"
	I0729 19:25:59.580117  742115 pod_ready.go:81] duration metric: took 10.869163ms for pod "coredns-7db6d8ff4d-vsjwb" in "kube-system" namespace to be "Ready" ...
	I0729 19:25:59.580130  742115 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-416933" in "kube-system" namespace to be "Ready" ...
	I0729 19:25:59.585104  742115 pod_ready.go:92] pod "etcd-addons-416933" in "kube-system" namespace has status "Ready":"True"
	I0729 19:25:59.585133  742115 pod_ready.go:81] duration metric: took 4.993603ms for pod "etcd-addons-416933" in "kube-system" namespace to be "Ready" ...
	I0729 19:25:59.585145  742115 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-416933" in "kube-system" namespace to be "Ready" ...
	I0729 19:25:59.593509  742115 pod_ready.go:92] pod "kube-apiserver-addons-416933" in "kube-system" namespace has status "Ready":"True"
	I0729 19:25:59.593539  742115 pod_ready.go:81] duration metric: took 8.38429ms for pod "kube-apiserver-addons-416933" in "kube-system" namespace to be "Ready" ...
	I0729 19:25:59.593551  742115 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-416933" in "kube-system" namespace to be "Ready" ...
	I0729 19:25:59.599029  742115 pod_ready.go:92] pod "kube-controller-manager-addons-416933" in "kube-system" namespace has status "Ready":"True"
	I0729 19:25:59.599058  742115 pod_ready.go:81] duration metric: took 5.499038ms for pod "kube-controller-manager-addons-416933" in "kube-system" namespace to be "Ready" ...
	I0729 19:25:59.599075  742115 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8lc4w" in "kube-system" namespace to be "Ready" ...
	I0729 19:25:59.817162  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:25:59.967162  742115 pod_ready.go:92] pod "kube-proxy-8lc4w" in "kube-system" namespace has status "Ready":"True"
	I0729 19:25:59.967200  742115 pod_ready.go:81] duration metric: took 368.114818ms for pod "kube-proxy-8lc4w" in "kube-system" namespace to be "Ready" ...
	I0729 19:25:59.967214  742115 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-416933" in "kube-system" namespace to be "Ready" ...
	I0729 19:25:59.991092  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:00.019053  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:00.025080  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:00.317505  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:00.367346  742115 pod_ready.go:92] pod "kube-scheduler-addons-416933" in "kube-system" namespace has status "Ready":"True"
	I0729 19:26:00.367379  742115 pod_ready.go:81] duration metric: took 400.155182ms for pod "kube-scheduler-addons-416933" in "kube-system" namespace to be "Ready" ...
	I0729 19:26:00.367395  742115 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace to be "Ready" ...
	I0729 19:26:00.491976  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:00.518983  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:00.520103  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:00.816993  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:00.991288  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:01.019492  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:01.020409  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:01.318588  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:01.491795  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:01.518380  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:01.520100  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:01.823387  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:01.991613  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:02.020938  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:02.021207  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:02.318152  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:02.374145  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:02.491048  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:02.518897  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:02.519919  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:02.817518  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:02.991591  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:03.020998  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:03.021462  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:03.317568  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:03.491552  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:03.520258  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:03.521614  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:03.817290  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:03.992517  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:04.021019  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:04.022695  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:04.317020  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:04.491588  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:04.519435  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:04.523077  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:04.817492  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:04.872831  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:04.998630  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:05.021094  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:05.021344  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:05.316630  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:05.491360  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:05.518947  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:05.521884  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:05.816360  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:05.991294  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:06.020264  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:06.021443  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:06.316724  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:06.491298  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:06.519336  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:06.520966  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:06.822835  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:06.875381  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:06.990921  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:07.019337  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:07.020324  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:07.316911  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:07.491320  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:07.519266  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:07.521162  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:07.817447  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:07.992864  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:08.020217  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:08.022693  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:08.316647  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:08.492289  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:08.519234  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:08.523734  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:08.816113  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:08.992160  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:09.019089  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:09.024288  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:09.316981  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:09.374218  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:09.622436  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:09.623246  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:09.628400  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:09.818192  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:09.993251  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:10.019043  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:10.021697  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:10.316511  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:10.491865  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:10.518683  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:10.525572  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:10.815866  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:10.991473  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:11.018985  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:11.021138  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:11.318011  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:11.491317  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:11.520185  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:11.521822  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:11.818817  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:11.873554  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:11.991846  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:12.018392  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:12.020881  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:12.316578  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:12.491350  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:12.520011  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:12.532003  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:12.816931  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:12.991161  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:13.020001  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:13.020887  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:13.316657  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:13.491394  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:13.521270  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:13.522178  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:13.992716  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:13.992739  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:13.993166  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:14.019277  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:14.020668  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:14.317515  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:14.492369  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:14.523669  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:14.525127  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:14.816769  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:14.991539  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:15.021989  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:15.022447  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:15.316074  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:15.491579  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:15.521588  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:15.522468  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:15.816984  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:15.991941  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:16.018841  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:16.020555  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:16.317934  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:16.374607  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:16.492461  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:16.521917  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:16.523909  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:16.816415  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:16.990911  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:17.018611  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:17.020826  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:17.317327  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:17.492068  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:17.519063  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:17.522257  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:17.817757  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:17.991356  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:18.021517  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:18.021840  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:18.316980  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:18.491037  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:18.519511  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:18.520437  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:18.816690  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:18.876210  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:18.991919  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:19.018776  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:19.022088  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:19.316719  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:19.624208  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:19.624652  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:19.624719  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:19.816528  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:19.990992  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:20.020447  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:20.020813  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:20.316903  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:20.491111  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:20.519240  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:20.521449  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:20.817083  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:20.991769  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:21.018316  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:21.019877  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:21.317370  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:21.373209  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:21.494762  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:21.517946  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:21.520554  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:21.816095  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:21.991476  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:22.019566  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:22.022447  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:22.316816  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:22.491104  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:22.519091  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:22.522568  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:22.816681  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:22.992139  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:23.020851  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:23.021185  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:23.317769  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:23.491578  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:23.520886  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:23.521321  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:23.819839  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:23.875517  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:23.991645  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:24.020846  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:24.020990  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:24.316483  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:24.491559  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:24.524221  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:24.524389  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:24.820263  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:24.991450  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:25.019367  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:25.022662  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:25.317434  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:25.491682  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:25.520910  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:25.521736  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:25.816428  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:25.991485  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:26.019207  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:26.021133  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:26.317130  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:26.373132  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:26.504021  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:26.525349  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:26.525698  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:26.830579  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:26.991418  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:27.018968  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:27.022150  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:27.316553  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:27.491095  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:27.518701  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:27.519485  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:27.816569  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:27.990946  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:28.018425  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:28.020408  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:28.315869  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:28.491764  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:28.521557  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:28.521979  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:28.817287  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:28.873643  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:28.991628  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:29.021446  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:29.021471  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:29.330891  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:29.491926  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:29.520845  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:29.523876  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:29.816745  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:29.991810  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:30.018573  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:30.019993  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:30.317339  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:30.490913  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:30.522068  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:30.522150  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:30.817932  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:30.991805  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:31.018666  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:31.020299  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:31.316281  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:31.373026  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:31.492776  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:31.519681  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:31.522081  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:31.816638  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:31.991451  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:32.019431  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:32.022010  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:32.316790  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:32.490968  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:32.519230  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:32.521896  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:32.817225  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:32.991609  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:33.020508  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:33.020841  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:33.317091  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:33.373758  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:33.491599  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:33.520677  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:33.521145  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:34.157197  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:34.157547  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:34.158349  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:34.158495  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:34.316272  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:34.491621  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:34.521721  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:34.521746  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:34.817194  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:34.991700  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:35.019108  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:35.020688  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:35.316831  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:35.491617  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:35.521741  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:35.522326  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:35.816766  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:35.873180  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:35.991603  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:36.020115  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:36.020573  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:36.317080  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:36.490759  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:36.519840  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:36.525606  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 19:26:36.816783  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:36.991381  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:37.019329  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:37.020910  742115 kapi.go:107] duration metric: took 40.505260167s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 19:26:37.316165  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:37.491635  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:37.518854  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:38.205279  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:38.208637  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:38.208766  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:38.209675  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:38.317079  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:38.491455  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:38.518857  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:38.816161  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:38.991875  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:39.018792  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:39.316258  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:39.490537  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:39.518940  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:39.817386  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:39.991060  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:40.019768  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:40.316818  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:40.373516  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:40.496013  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:40.518240  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:40.819294  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:40.990877  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:41.018125  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:41.317456  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:41.491943  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:41.518306  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:41.817101  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:41.991658  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:42.019192  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:42.317060  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:42.491542  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:42.522211  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:42.816315  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:42.873060  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:42.991566  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:43.020058  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:43.316817  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:43.492370  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:43.519458  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:43.816754  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:43.990850  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:44.018818  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:44.317327  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:44.491599  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:44.519663  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:44.816573  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:44.873370  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:44.991374  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:45.026682  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:45.316825  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:45.504364  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:45.524675  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:45.816468  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:45.992098  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:46.018952  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:46.317156  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:46.491666  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:46.521643  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:46.817104  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:46.873938  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:46.991758  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:47.020329  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:47.316613  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:47.491165  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:47.518735  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:47.816639  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:47.991389  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:48.018908  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:48.316876  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:48.494808  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:48.521775  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:48.818053  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:48.874070  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:48.991837  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:49.018963  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:49.316294  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:49.491276  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:49.519031  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:49.817216  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:49.991281  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:50.020189  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:50.323886  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:50.490792  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:50.519663  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:50.816525  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:50.886700  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:50.991866  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:51.018469  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:51.318093  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:51.491204  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:51.519200  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:51.816333  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:51.992521  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:52.019595  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:52.316238  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:52.491799  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:52.519130  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:52.816903  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:52.994621  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:53.018899  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:53.316844  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:53.374236  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:53.490854  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:53.518586  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:53.817016  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:53.991897  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:54.018407  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:54.316432  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:54.491871  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:54.518507  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:54.816284  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:54.994645  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:55.025520  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:55.317438  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:55.490997  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:55.519302  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:55.816782  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:55.873196  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:55.991305  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:56.022898  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:56.316341  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:56.491894  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:56.518223  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:56.817345  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:56.991241  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:57.018886  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:57.318940  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:57.491531  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:57.519013  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:57.816487  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:57.990893  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:58.019340  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:58.321735  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:58.373868  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:26:58.492523  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:58.518793  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:58.816210  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:58.994554  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:59.024476  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:59.317973  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:59.492732  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:26:59.519124  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:26:59.816732  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:26:59.991715  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:00.019579  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:00.317366  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:00.376753  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:00.491512  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:00.526240  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:00.817885  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:00.991422  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:01.021492  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:01.316991  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:01.491723  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:01.519304  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:01.817821  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:01.992362  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:02.019593  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:02.316533  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:02.380933  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:02.491314  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:02.519850  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:02.816442  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:02.991430  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:03.018891  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:03.317084  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:03.494802  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:03.521600  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:03.817132  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:03.991770  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:04.018298  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:04.325268  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:04.491822  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:04.532303  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:04.817504  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:04.875413  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:04.991984  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:05.021790  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:05.319805  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:05.492113  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:05.520404  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:05.817007  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:05.992345  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:06.025914  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:06.316498  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:06.763149  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:06.763298  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:06.817032  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:06.991512  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:07.019716  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:07.317372  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:07.373956  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:07.490788  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:07.519741  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:07.818021  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:07.992044  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:08.018671  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:08.317381  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:08.491870  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:08.518722  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:09.058934  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:09.059839  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:09.059865  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:09.317622  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:09.374308  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:09.491797  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:09.518208  742115 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 19:27:09.817349  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:09.991816  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:10.019678  742115 kapi.go:107] duration metric: took 1m13.505263544s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 19:27:10.317225  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:10.492317  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:10.816884  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:10.991652  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:11.317340  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:11.492005  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:11.817650  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:11.873707  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:11.992367  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:12.317210  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:12.491045  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:12.817487  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:12.991670  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:13.317248  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:13.491224  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 19:27:13.826933  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:13.993222  742115 kapi.go:107] duration metric: took 1m15.005652065s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 19:27:13.995078  742115 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-416933 cluster.
	I0729 19:27:13.996490  742115 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 19:27:13.997960  742115 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0729 19:27:14.317480  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:14.373733  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:14.817139  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:15.317397  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:15.817486  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:16.316768  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:16.816604  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:16.879423  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:17.316409  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:17.821427  742115 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 19:27:18.316207  742115 kapi.go:107] duration metric: took 1m20.505039384s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 19:27:18.318156  742115 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, nvidia-device-plugin, ingress-dns, metrics-server, helm-tiller, cloud-spanner, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0729 19:27:18.319353  742115 addons.go:510] duration metric: took 1m30.000885078s for enable addons: enabled=[default-storageclass storage-provisioner nvidia-device-plugin ingress-dns metrics-server helm-tiller cloud-spanner inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0729 19:27:19.374255  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:21.873593  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:23.874633  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:25.874876  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:28.373038  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:30.374226  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:32.374899  742115 pod_ready.go:102] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"False"
	I0729 19:27:34.874327  742115 pod_ready.go:92] pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace has status "Ready":"True"
	I0729 19:27:34.874353  742115 pod_ready.go:81] duration metric: took 1m34.506950702s for pod "metrics-server-c59844bb4-qpjzn" in "kube-system" namespace to be "Ready" ...
	I0729 19:27:34.874363  742115 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-qlk76" in "kube-system" namespace to be "Ready" ...
	I0729 19:27:34.878821  742115 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-qlk76" in "kube-system" namespace has status "Ready":"True"
	I0729 19:27:34.878840  742115 pod_ready.go:81] duration metric: took 4.470669ms for pod "nvidia-device-plugin-daemonset-qlk76" in "kube-system" namespace to be "Ready" ...
	I0729 19:27:34.878858  742115 pod_ready.go:38] duration metric: took 1m38.36413585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 19:27:34.878877  742115 api_server.go:52] waiting for apiserver process to appear ...
	I0729 19:27:34.878921  742115 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 19:27:34.878971  742115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 19:27:34.925012  742115 cri.go:89] found id: "2519d96fd281095978b5d123d97dc22d6457b171b083e70e489c2bd9285fc262"
	I0729 19:27:34.925039  742115 cri.go:89] found id: ""
	I0729 19:27:34.925050  742115 logs.go:276] 1 containers: [2519d96fd281095978b5d123d97dc22d6457b171b083e70e489c2bd9285fc262]
	I0729 19:27:34.925113  742115 ssh_runner.go:195] Run: which crictl
	I0729 19:27:34.929082  742115 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 19:27:34.929149  742115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 19:27:34.981118  742115 cri.go:89] found id: "43a4a07ae7ca3473626d361debcede680ac648e65f1ed2c052a7e975dc88b011"
	I0729 19:27:34.981148  742115 cri.go:89] found id: ""
	I0729 19:27:34.981156  742115 logs.go:276] 1 containers: [43a4a07ae7ca3473626d361debcede680ac648e65f1ed2c052a7e975dc88b011]
	I0729 19:27:34.981211  742115 ssh_runner.go:195] Run: which crictl
	I0729 19:27:34.985353  742115 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 19:27:34.985422  742115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 19:27:35.030847  742115 cri.go:89] found id: "a28536d320b1eb78d7b3724019961e9e7f79cee29582636ef3508a2bab439bc5"
	I0729 19:27:35.030870  742115 cri.go:89] found id: ""
	I0729 19:27:35.030878  742115 logs.go:276] 1 containers: [a28536d320b1eb78d7b3724019961e9e7f79cee29582636ef3508a2bab439bc5]
	I0729 19:27:35.030935  742115 ssh_runner.go:195] Run: which crictl
	I0729 19:27:35.036097  742115 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 19:27:35.036178  742115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 19:27:35.079367  742115 cri.go:89] found id: "861bdf7e09fb597f5008d2459e68049ecf220b1490b731a10d06310654f04b4e"
	I0729 19:27:35.079396  742115 cri.go:89] found id: ""
	I0729 19:27:35.079407  742115 logs.go:276] 1 containers: [861bdf7e09fb597f5008d2459e68049ecf220b1490b731a10d06310654f04b4e]
	I0729 19:27:35.079474  742115 ssh_runner.go:195] Run: which crictl
	I0729 19:27:35.083877  742115 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 19:27:35.083943  742115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 19:27:35.128043  742115 cri.go:89] found id: "dd8494e35ec96ae79a5b05aab043d5f7a2fff36780bac2fbd1eb58e4a87f2832"
	I0729 19:27:35.128077  742115 cri.go:89] found id: ""
	I0729 19:27:35.128088  742115 logs.go:276] 1 containers: [dd8494e35ec96ae79a5b05aab043d5f7a2fff36780bac2fbd1eb58e4a87f2832]
	I0729 19:27:35.128153  742115 ssh_runner.go:195] Run: which crictl
	I0729 19:27:35.132049  742115 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 19:27:35.132120  742115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 19:27:35.170761  742115 cri.go:89] found id: "6ef431e352a3e08aa1e4fedcaf398c4d87f3d9ab704c6ab8cc249a272f9b48d2"
	I0729 19:27:35.170787  742115 cri.go:89] found id: ""
	I0729 19:27:35.170797  742115 logs.go:276] 1 containers: [6ef431e352a3e08aa1e4fedcaf398c4d87f3d9ab704c6ab8cc249a272f9b48d2]
	I0729 19:27:35.170867  742115 ssh_runner.go:195] Run: which crictl
	I0729 19:27:35.174705  742115 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 19:27:35.174767  742115 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 19:27:35.214504  742115 cri.go:89] found id: ""
	I0729 19:27:35.214533  742115 logs.go:276] 0 containers: []
	W0729 19:27:35.214544  742115 logs.go:278] No container was found matching "kindnet"
	I0729 19:27:35.214562  742115 logs.go:123] Gathering logs for describe nodes ...
	I0729 19:27:35.214580  742115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 19:27:35.346515  742115 logs.go:123] Gathering logs for etcd [43a4a07ae7ca3473626d361debcede680ac648e65f1ed2c052a7e975dc88b011] ...
	I0729 19:27:35.346553  742115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43a4a07ae7ca3473626d361debcede680ac648e65f1ed2c052a7e975dc88b011"
	I0729 19:27:35.414992  742115 logs.go:123] Gathering logs for kube-proxy [dd8494e35ec96ae79a5b05aab043d5f7a2fff36780bac2fbd1eb58e4a87f2832] ...
	I0729 19:27:35.415032  742115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd8494e35ec96ae79a5b05aab043d5f7a2fff36780bac2fbd1eb58e4a87f2832"
	I0729 19:27:35.451345  742115 logs.go:123] Gathering logs for dmesg ...
	I0729 19:27:35.451380  742115 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 19:27:35.465524  742115 logs.go:123] Gathering logs for kube-apiserver [2519d96fd281095978b5d123d97dc22d6457b171b083e70e489c2bd9285fc262] ...
	I0729 19:27:35.465569  742115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2519d96fd281095978b5d123d97dc22d6457b171b083e70e489c2bd9285fc262"
	I0729 19:27:35.539193  742115 logs.go:123] Gathering logs for coredns [a28536d320b1eb78d7b3724019961e9e7f79cee29582636ef3508a2bab439bc5] ...
	I0729 19:27:35.539228  742115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a28536d320b1eb78d7b3724019961e9e7f79cee29582636ef3508a2bab439bc5"
	I0729 19:27:35.579906  742115 logs.go:123] Gathering logs for kube-scheduler [861bdf7e09fb597f5008d2459e68049ecf220b1490b731a10d06310654f04b4e] ...
	I0729 19:27:35.579942  742115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 861bdf7e09fb597f5008d2459e68049ecf220b1490b731a10d06310654f04b4e"
	I0729 19:27:35.680959  742115 logs.go:123] Gathering logs for kube-controller-manager [6ef431e352a3e08aa1e4fedcaf398c4d87f3d9ab704c6ab8cc249a272f9b48d2] ...
	I0729 19:27:35.680992  742115 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ef431e352a3e08aa1e4fedcaf398c4d87f3d9ab704c6ab8cc249a272f9b48d2"
	I0729 19:27:35.747663  742115 logs.go:123] Gathering logs for CRI-O ...
	I0729 19:27:35.747704  742115 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"

                                                
                                                
** /stderr **
addons_test.go:112: out/minikube-linux-amd64 start -p addons-416933 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller failed: signal: killed
--- FAIL: TestAddons/Setup (2400.06s)

                                                
                                    
x
+
TestCertExpiration (1090.68s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-461577 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-461577 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (43.596903936s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-461577 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p cert-expiration-461577 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: exit status 109 (14m24.946846851s)

                                                
                                                
-- stdout --
	* [cert-expiration-461577] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "cert-expiration-461577" primary control-plane node in "cert-expiration-461577" cluster
	* Updating the running kvm2 "cert-expiration-461577" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Certificate client.crt has expired. Generating a new one...
	! Certificate apiserver.crt.3c97931c has expired. Generating a new one...
	! Certificate proxy-client.crt has expired. Generating a new one...
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.800496ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000282741s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.480639ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.006421971s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.480639ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.006421971s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-linux-amd64 start -p cert-expiration-461577 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio" : exit status 109
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-29 21:19:51.571783228 +0000 UTC m=+6995.740490413
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cert-expiration-461577 -n cert-expiration-461577
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p cert-expiration-461577 -n cert-expiration-461577: exit status 2 (219.633384ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p cert-expiration-461577 logs -n 25
helpers_test.go:252: TestCertExpiration logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-404553 sudo cat                              | bridge-404553          | jenkins | v1.33.1 | 29 Jul 24 21:10 UTC | 29 Jul 24 21:10 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p bridge-404553 sudo                                  | bridge-404553          | jenkins | v1.33.1 | 29 Jul 24 21:10 UTC | 29 Jul 24 21:10 UTC |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p bridge-404553 sudo                                  | bridge-404553          | jenkins | v1.33.1 | 29 Jul 24 21:10 UTC |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-404553 sudo                                  | bridge-404553          | jenkins | v1.33.1 | 29 Jul 24 21:10 UTC | 29 Jul 24 21:10 UTC |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-404553 sudo cat                              | bridge-404553          | jenkins | v1.33.1 | 29 Jul 24 21:10 UTC | 29 Jul 24 21:10 UTC |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-404553 sudo cat                              | bridge-404553          | jenkins | v1.33.1 | 29 Jul 24 21:10 UTC | 29 Jul 24 21:10 UTC |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-404553 sudo                                  | bridge-404553          | jenkins | v1.33.1 | 29 Jul 24 21:10 UTC | 29 Jul 24 21:10 UTC |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-404553 sudo                                  | bridge-404553          | jenkins | v1.33.1 | 29 Jul 24 21:10 UTC | 29 Jul 24 21:10 UTC |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-404553 sudo                                  | bridge-404553          | jenkins | v1.33.1 | 29 Jul 24 21:10 UTC | 29 Jul 24 21:10 UTC |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-404553 sudo find                             | bridge-404553          | jenkins | v1.33.1 | 29 Jul 24 21:10 UTC | 29 Jul 24 21:10 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-404553 sudo crio                             | bridge-404553          | jenkins | v1.33.1 | 29 Jul 24 21:10 UTC | 29 Jul 24 21:10 UTC |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-404553                                       | bridge-404553          | jenkins | v1.33.1 | 29 Jul 24 21:10 UTC | 29 Jul 24 21:10 UTC |
	| start   | -p embed-certs-852252                                  | embed-certs-852252     | jenkins | v1.33.1 | 29 Jul 24 21:10 UTC | 29 Jul 24 21:11 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-852252            | embed-certs-852252     | jenkins | v1.33.1 | 29 Jul 24 21:12 UTC | 29 Jul 24 21:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-852252                                  | embed-certs-852252     | jenkins | v1.33.1 | 29 Jul 24 21:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-497952             | no-preload-497952      | jenkins | v1.33.1 | 29 Jul 24 21:12 UTC | 29 Jul 24 21:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-497952                                   | no-preload-497952      | jenkins | v1.33.1 | 29 Jul 24 21:12 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-813126        | old-k8s-version-813126 | jenkins | v1.33.1 | 29 Jul 24 21:13 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-852252                 | embed-certs-852252     | jenkins | v1.33.1 | 29 Jul 24 21:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-852252                                  | embed-certs-852252     | jenkins | v1.33.1 | 29 Jul 24 21:14 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-497952                  | no-preload-497952      | jenkins | v1.33.1 | 29 Jul 24 21:14 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-497952 --memory=2200                     | no-preload-497952      | jenkins | v1.33.1 | 29 Jul 24 21:14 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                        |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-813126                              | old-k8s-version-813126 | jenkins | v1.33.1 | 29 Jul 24 21:15 UTC | 29 Jul 24 21:15 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-813126             | old-k8s-version-813126 | jenkins | v1.33.1 | 29 Jul 24 21:15 UTC | 29 Jul 24 21:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-813126                              | old-k8s-version-813126 | jenkins | v1.33.1 | 29 Jul 24 21:15 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=kvm2                                          |                        |         |         |                     |                     |
	|         | --container-runtime=crio                               |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
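The last `start` entry in this table spreads its flags across several wrapped rows. Reassembled into the single shell invocation it represents (every flag is taken from the table row; the binary path matches the MINIKUBE_BIN value used elsewhere in this report), it is roughly:

	out/minikube-linux-amd64 start -p old-k8s-version-813126 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0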
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 21:15:33
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 21:15:33.244920  803586 out.go:291] Setting OutFile to fd 1 ...
	I0729 21:15:33.245054  803586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 21:15:33.245066  803586 out.go:304] Setting ErrFile to fd 2...
	I0729 21:15:33.245073  803586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 21:15:33.245249  803586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 21:15:33.245815  803586 out.go:298] Setting JSON to false
	I0729 21:15:33.246770  803586 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":17880,"bootTime":1722269853,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 21:15:33.246828  803586 start.go:139] virtualization: kvm guest
	I0729 21:15:33.248995  803586 out.go:177] * [old-k8s-version-813126] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 21:15:33.250240  803586 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 21:15:33.250244  803586 notify.go:220] Checking for updates...
	I0729 21:15:33.251737  803586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 21:15:33.253083  803586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 21:15:33.254349  803586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 21:15:33.255572  803586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 21:15:33.256823  803586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 21:15:33.258336  803586 config.go:182] Loaded profile config "old-k8s-version-813126": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 21:15:33.258776  803586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 21:15:33.258830  803586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 21:15:33.275318  803586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43425
	I0729 21:15:33.275715  803586 main.go:141] libmachine: () Calling .GetVersion
	I0729 21:15:33.276244  803586 main.go:141] libmachine: Using API Version  1
	I0729 21:15:33.276269  803586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 21:15:33.276668  803586 main.go:141] libmachine: () Calling .GetMachineName
	I0729 21:15:33.276856  803586 main.go:141] libmachine: (old-k8s-version-813126) Calling .DriverName
	I0729 21:15:33.278616  803586 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 21:15:33.279829  803586 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 21:15:33.280166  803586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 21:15:33.280202  803586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 21:15:33.294796  803586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35283
	I0729 21:15:33.295179  803586 main.go:141] libmachine: () Calling .GetVersion
	I0729 21:15:33.295622  803586 main.go:141] libmachine: Using API Version  1
	I0729 21:15:33.295640  803586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 21:15:33.295960  803586 main.go:141] libmachine: () Calling .GetMachineName
	I0729 21:15:33.296180  803586 main.go:141] libmachine: (old-k8s-version-813126) Calling .DriverName
	I0729 21:15:33.331695  803586 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 21:15:33.332969  803586 start.go:297] selected driver: kvm2
	I0729 21:15:33.333003  803586 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-813126 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 21:15:33.333118  803586 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 21:15:33.333772  803586 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 21:15:33.333855  803586 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 21:15:33.348844  803586 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 21:15:33.349306  803586 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 21:15:33.349346  803586 cni.go:84] Creating CNI manager for ""
	I0729 21:15:33.349363  803586 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 21:15:33.349432  803586 start.go:340] cluster config:
	{Name:old-k8s-version-813126 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-813126 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.69 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 21:15:33.349569  803586 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 21:15:33.351365  803586 out.go:177] * Starting "old-k8s-version-813126" primary control-plane node in "old-k8s-version-813126" cluster
	I0729 21:15:33.352632  803586 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 21:15:33.352680  803586 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 21:15:33.352688  803586 cache.go:56] Caching tarball of preloaded images
	I0729 21:15:33.352769  803586 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 21:15:33.352779  803586 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 21:15:33.352889  803586 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/old-k8s-version-813126/config.json ...
	I0729 21:15:33.353066  803586 start.go:360] acquireMachinesLock for old-k8s-version-813126: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 21:15:36.612363  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:15:39.684373  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:15:44.293663  788724 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.000282741s
	I0729 21:15:44.293693  788724 kubeadm.go:310] 
	I0729 21:15:44.293726  788724 kubeadm.go:310] Unfortunately, an error has occurred:
	I0729 21:15:44.293749  788724 kubeadm.go:310] 	context deadline exceeded
	I0729 21:15:44.293752  788724 kubeadm.go:310] 
	I0729 21:15:44.293777  788724 kubeadm.go:310] This error is likely caused by:
	I0729 21:15:44.293834  788724 kubeadm.go:310] 	- The kubelet is not running
	I0729 21:15:44.293932  788724 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 21:15:44.293937  788724 kubeadm.go:310] 
	I0729 21:15:44.294014  788724 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 21:15:44.294045  788724 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0729 21:15:44.294069  788724 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0729 21:15:44.294073  788724 kubeadm.go:310] 
	I0729 21:15:44.294157  788724 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 21:15:44.294222  788724 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 21:15:44.294294  788724 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0729 21:15:44.294411  788724 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 21:15:44.294489  788724 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0729 21:15:44.294552  788724 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I0729 21:15:44.295224  788724 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 21:15:44.295307  788724 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0729 21:15:44.295399  788724 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W0729 21:15:44.295570  788724 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.800496ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.000282741s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
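The kubeadm failure output above already names the on-node debugging commands. Collected into one sequence (the commands are quoted from that output; reaching the node, for example via `minikube ssh`, is assumed and not something this test run does):

	# kubelet health and its journal
	systemctl status kubelet
	journalctl -xeu kubelet
	# list control-plane containers through CRI-O's socket, then fetch logs from a failing one
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID is a placeholder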
	
	I0729 21:15:44.295630  788724 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 21:15:45.764327  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:15:48.840327  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:15:48.970955  788724 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.675290284s)
	I0729 21:15:48.971040  788724 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 21:15:48.985692  788724 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 21:15:48.994923  788724 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 21:15:48.994934  788724 kubeadm.go:157] found existing configuration files:
	
	I0729 21:15:48.994982  788724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 21:15:49.003542  788724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 21:15:49.003601  788724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 21:15:49.012614  788724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 21:15:49.021176  788724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 21:15:49.021217  788724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 21:15:49.030215  788724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 21:15:49.038725  788724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 21:15:49.038782  788724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 21:15:49.047735  788724 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 21:15:49.056384  788724 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 21:15:49.056435  788724 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
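The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each file under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 (here, because none of the files exist after `kubeadm reset`) is removed before `kubeadm init` is retried. The same check written as a compact shell loop, for illustration only (the loop wrapper is mine; the file names, grep target and rm calls are taken from the log):

	# drop kubeconfig files that don't point at the expected control-plane endpoint
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	    fi
	done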
	I0729 21:15:49.065120  788724 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 21:15:49.238976  788724 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 21:15:54.916348  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:15:57.988367  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:16:04.068324  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:16:07.140337  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:16:13.220370  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:16:16.292340  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:16:22.372327  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:16:25.444334  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:16:31.524381  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:16:34.596428  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:16:40.676293  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:16:43.748347  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:16:49.828364  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:16:52.900328  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:16:58.980314  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:17:02.052388  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:17:08.132279  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:17:11.204402  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:17:17.284356  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:17:20.356272  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:17:26.436309  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:17:29.508273  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:17:35.588387  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:17:38.660309  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:17:44.740334  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:17:47.812407  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:17:53.892329  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:17:56.964294  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:18:03.044286  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:18:06.116442  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:18:12.196334  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:18:15.268414  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:18:21.348322  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:18:24.420305  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:18:30.500296  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:18:33.572363  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:18:39.652370  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:18:42.724342  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:18:48.804400  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:18:51.876298  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:18:57.956325  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:19:01.028332  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:19:07.108271  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:19:10.180313  803114 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.222:22: connect: no route to host
	I0729 21:19:13.184477  803242 start.go:364] duration metric: took 4m27.72135189s to acquireMachinesLock for "no-preload-497952"
	I0729 21:19:13.184578  803242 start.go:96] Skipping create...Using existing machine configuration
	I0729 21:19:13.184588  803242 fix.go:54] fixHost starting: 
	I0729 21:19:13.185102  803242 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 21:19:13.185148  803242 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 21:19:13.201176  803242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37097
	I0729 21:19:13.201746  803242 main.go:141] libmachine: () Calling .GetVersion
	I0729 21:19:13.202286  803242 main.go:141] libmachine: Using API Version  1
	I0729 21:19:13.202327  803242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 21:19:13.202753  803242 main.go:141] libmachine: () Calling .GetMachineName
	I0729 21:19:13.202986  803242 main.go:141] libmachine: (no-preload-497952) Calling .DriverName
	I0729 21:19:13.203165  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetState
	I0729 21:19:13.204764  803242 fix.go:112] recreateIfNeeded on no-preload-497952: state=Stopped err=<nil>
	I0729 21:19:13.204788  803242 main.go:141] libmachine: (no-preload-497952) Calling .DriverName
	W0729 21:19:13.204958  803242 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 21:19:13.206941  803242 out.go:177] * Restarting existing kvm2 VM for "no-preload-497952" ...
	I0729 21:19:13.208307  803242 main.go:141] libmachine: (no-preload-497952) Calling .Start
	I0729 21:19:13.208504  803242 main.go:141] libmachine: (no-preload-497952) Ensuring networks are active...
	I0729 21:19:13.209307  803242 main.go:141] libmachine: (no-preload-497952) Ensuring network default is active
	I0729 21:19:13.209683  803242 main.go:141] libmachine: (no-preload-497952) Ensuring network mk-no-preload-497952 is active
	I0729 21:19:13.210056  803242 main.go:141] libmachine: (no-preload-497952) Getting domain xml...
	I0729 21:19:13.210836  803242 main.go:141] libmachine: (no-preload-497952) Creating domain...
	I0729 21:19:14.422863  803242 main.go:141] libmachine: (no-preload-497952) Waiting to get IP...
	I0729 21:19:14.423720  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:14.424192  803242 main.go:141] libmachine: (no-preload-497952) DBG | unable to find current IP address of domain no-preload-497952 in network mk-no-preload-497952
	I0729 21:19:14.424290  803242 main.go:141] libmachine: (no-preload-497952) DBG | I0729 21:19:14.424165  804789 retry.go:31] will retry after 234.113791ms: waiting for machine to come up
	I0729 21:19:14.659780  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:14.660397  803242 main.go:141] libmachine: (no-preload-497952) DBG | unable to find current IP address of domain no-preload-497952 in network mk-no-preload-497952
	I0729 21:19:14.660426  803242 main.go:141] libmachine: (no-preload-497952) DBG | I0729 21:19:14.660355  804789 retry.go:31] will retry after 365.586573ms: waiting for machine to come up
	I0729 21:19:15.028131  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:15.028645  803242 main.go:141] libmachine: (no-preload-497952) DBG | unable to find current IP address of domain no-preload-497952 in network mk-no-preload-497952
	I0729 21:19:15.028673  803242 main.go:141] libmachine: (no-preload-497952) DBG | I0729 21:19:15.028604  804789 retry.go:31] will retry after 477.042387ms: waiting for machine to come up
	I0729 21:19:13.182078  803114 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 21:19:13.182128  803114 main.go:141] libmachine: (embed-certs-852252) Calling .GetMachineName
	I0729 21:19:13.182476  803114 buildroot.go:166] provisioning hostname "embed-certs-852252"
	I0729 21:19:13.182504  803114 main.go:141] libmachine: (embed-certs-852252) Calling .GetMachineName
	I0729 21:19:13.182725  803114 main.go:141] libmachine: (embed-certs-852252) Calling .GetSSHHostname
	I0729 21:19:13.184333  803114 machine.go:97] duration metric: took 4m37.423743942s to provisionDockerMachine
	I0729 21:19:13.184385  803114 fix.go:56] duration metric: took 4m37.44571504s for fixHost
	I0729 21:19:13.184391  803114 start.go:83] releasing machines lock for "embed-certs-852252", held for 4m37.445739586s
	W0729 21:19:13.184416  803114 start.go:714] error starting host: provision: host is not running
	W0729 21:19:13.184633  803114 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0729 21:19:13.184645  803114 start.go:729] Will try again in 5 seconds ...
	I0729 21:19:15.507314  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:15.507933  803242 main.go:141] libmachine: (no-preload-497952) DBG | unable to find current IP address of domain no-preload-497952 in network mk-no-preload-497952
	I0729 21:19:15.507961  803242 main.go:141] libmachine: (no-preload-497952) DBG | I0729 21:19:15.507881  804789 retry.go:31] will retry after 564.172317ms: waiting for machine to come up
	I0729 21:19:16.073205  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:16.073792  803242 main.go:141] libmachine: (no-preload-497952) DBG | unable to find current IP address of domain no-preload-497952 in network mk-no-preload-497952
	I0729 21:19:16.073852  803242 main.go:141] libmachine: (no-preload-497952) DBG | I0729 21:19:16.073729  804789 retry.go:31] will retry after 723.659896ms: waiting for machine to come up
	I0729 21:19:16.798615  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:16.799054  803242 main.go:141] libmachine: (no-preload-497952) DBG | unable to find current IP address of domain no-preload-497952 in network mk-no-preload-497952
	I0729 21:19:16.799083  803242 main.go:141] libmachine: (no-preload-497952) DBG | I0729 21:19:16.799001  804789 retry.go:31] will retry after 942.283724ms: waiting for machine to come up
	I0729 21:19:17.743094  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:17.743583  803242 main.go:141] libmachine: (no-preload-497952) DBG | unable to find current IP address of domain no-preload-497952 in network mk-no-preload-497952
	I0729 21:19:17.743607  803242 main.go:141] libmachine: (no-preload-497952) DBG | I0729 21:19:17.743530  804789 retry.go:31] will retry after 1.13102542s: waiting for machine to come up
	I0729 21:19:18.876054  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:18.876533  803242 main.go:141] libmachine: (no-preload-497952) DBG | unable to find current IP address of domain no-preload-497952 in network mk-no-preload-497952
	I0729 21:19:18.876572  803242 main.go:141] libmachine: (no-preload-497952) DBG | I0729 21:19:18.876497  804789 retry.go:31] will retry after 959.950873ms: waiting for machine to come up
	I0729 21:19:19.837684  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:19.838282  803242 main.go:141] libmachine: (no-preload-497952) DBG | unable to find current IP address of domain no-preload-497952 in network mk-no-preload-497952
	I0729 21:19:19.838319  803242 main.go:141] libmachine: (no-preload-497952) DBG | I0729 21:19:19.838212  804789 retry.go:31] will retry after 1.523418759s: waiting for machine to come up
	I0729 21:19:18.186433  803114 start.go:360] acquireMachinesLock for embed-certs-852252: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 21:19:21.363962  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:21.364416  803242 main.go:141] libmachine: (no-preload-497952) DBG | unable to find current IP address of domain no-preload-497952 in network mk-no-preload-497952
	I0729 21:19:21.364445  803242 main.go:141] libmachine: (no-preload-497952) DBG | I0729 21:19:21.364366  804789 retry.go:31] will retry after 1.398092407s: waiting for machine to come up
	I0729 21:19:22.764346  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:22.764797  803242 main.go:141] libmachine: (no-preload-497952) DBG | unable to find current IP address of domain no-preload-497952 in network mk-no-preload-497952
	I0729 21:19:22.764828  803242 main.go:141] libmachine: (no-preload-497952) DBG | I0729 21:19:22.764751  804789 retry.go:31] will retry after 2.038288228s: waiting for machine to come up
	I0729 21:19:24.805343  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:24.805821  803242 main.go:141] libmachine: (no-preload-497952) DBG | unable to find current IP address of domain no-preload-497952 in network mk-no-preload-497952
	I0729 21:19:24.805852  803242 main.go:141] libmachine: (no-preload-497952) DBG | I0729 21:19:24.805788  804789 retry.go:31] will retry after 3.527870753s: waiting for machine to come up
	I0729 21:19:28.335299  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:28.335750  803242 main.go:141] libmachine: (no-preload-497952) DBG | unable to find current IP address of domain no-preload-497952 in network mk-no-preload-497952
	I0729 21:19:28.335779  803242 main.go:141] libmachine: (no-preload-497952) DBG | I0729 21:19:28.335702  804789 retry.go:31] will retry after 4.404822002s: waiting for machine to come up
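The retry.go lines above show libmachine polling for the restarted VM's DHCP lease with an increasing backoff (234ms up to several seconds) until an address appears. On a KVM host the same information can be inspected directly with libvirt's CLI; this is an assumption about available host tooling, not something the test itself executes:

	# DHCP leases on the network minikube created for this profile (names taken from the log)
	virsh net-dhcp-leases mk-no-preload-497952
	# interfaces/addresses reported for the domain once it is up
	virsh domifaddr no-preload-497952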
	I0729 21:19:34.356444  803586 start.go:364] duration metric: took 4m1.003342595s to acquireMachinesLock for "old-k8s-version-813126"
	I0729 21:19:34.356518  803586 start.go:96] Skipping create...Using existing machine configuration
	I0729 21:19:34.356530  803586 fix.go:54] fixHost starting: 
	I0729 21:19:34.356954  803586 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 21:19:34.356990  803586 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 21:19:34.374877  803586 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46455
	I0729 21:19:34.375326  803586 main.go:141] libmachine: () Calling .GetVersion
	I0729 21:19:34.375840  803586 main.go:141] libmachine: Using API Version  1
	I0729 21:19:34.375863  803586 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 21:19:34.376277  803586 main.go:141] libmachine: () Calling .GetMachineName
	I0729 21:19:34.376462  803586 main.go:141] libmachine: (old-k8s-version-813126) Calling .DriverName
	I0729 21:19:34.376638  803586 main.go:141] libmachine: (old-k8s-version-813126) Calling .GetState
	I0729 21:19:34.378276  803586 fix.go:112] recreateIfNeeded on old-k8s-version-813126: state=Stopped err=<nil>
	I0729 21:19:34.378355  803586 main.go:141] libmachine: (old-k8s-version-813126) Calling .DriverName
	W0729 21:19:34.378529  803586 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 21:19:34.380956  803586 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-813126" ...
	I0729 21:19:32.745233  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:32.745741  803242 main.go:141] libmachine: (no-preload-497952) Found IP for machine: 192.168.39.177
	I0729 21:19:32.745765  803242 main.go:141] libmachine: (no-preload-497952) Reserving static IP address...
	I0729 21:19:32.745777  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has current primary IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:32.746249  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "no-preload-497952", mac: "52:54:00:ef:27:ad", ip: "192.168.39.177"} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:32.746268  803242 main.go:141] libmachine: (no-preload-497952) Reserved static IP address: 192.168.39.177
	I0729 21:19:32.746281  803242 main.go:141] libmachine: (no-preload-497952) DBG | skip adding static IP to network mk-no-preload-497952 - found existing host DHCP lease matching {name: "no-preload-497952", mac: "52:54:00:ef:27:ad", ip: "192.168.39.177"}
	I0729 21:19:32.746295  803242 main.go:141] libmachine: (no-preload-497952) DBG | Getting to WaitForSSH function...
	I0729 21:19:32.746307  803242 main.go:141] libmachine: (no-preload-497952) Waiting for SSH to be available...
	I0729 21:19:32.748398  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:32.748723  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:32.748764  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:32.748837  803242 main.go:141] libmachine: (no-preload-497952) DBG | Using SSH client type: external
	I0729 21:19:32.748883  803242 main.go:141] libmachine: (no-preload-497952) DBG | Using SSH private key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/no-preload-497952/id_rsa (-rw-------)
	I0729 21:19:32.748924  803242 main.go:141] libmachine: (no-preload-497952) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/no-preload-497952/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 21:19:32.748936  803242 main.go:141] libmachine: (no-preload-497952) DBG | About to run SSH command:
	I0729 21:19:32.748946  803242 main.go:141] libmachine: (no-preload-497952) DBG | exit 0
	I0729 21:19:32.872224  803242 main.go:141] libmachine: (no-preload-497952) DBG | SSH cmd err, output: <nil>: 
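The DBG lines above record the exact external ssh invocation WaitForSSH assembles. Written out as a command you could run by hand (options, key path and target are copied from the log; only the argument ordering is rearranged), it is approximately:

	ssh -F /dev/null \
	  -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	  -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	  -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -o IdentitiesOnly=yes \
	  -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/no-preload-497952/id_rsa \
	  -p 22 docker@192.168.39.177 'exit 0'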
	I0729 21:19:32.872649  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetConfigRaw
	I0729 21:19:32.873303  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetIP
	I0729 21:19:32.876112  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:32.876571  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:32.876598  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:32.876922  803242 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/no-preload-497952/config.json ...
	I0729 21:19:32.877158  803242 machine.go:94] provisionDockerMachine start ...
	I0729 21:19:32.877182  803242 main.go:141] libmachine: (no-preload-497952) Calling .DriverName
	I0729 21:19:32.877418  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHHostname
	I0729 21:19:32.879715  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:32.880024  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:32.880070  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:32.880214  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHPort
	I0729 21:19:32.880386  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHKeyPath
	I0729 21:19:32.880538  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHKeyPath
	I0729 21:19:32.880695  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHUsername
	I0729 21:19:32.880912  803242 main.go:141] libmachine: Using SSH client type: native
	I0729 21:19:32.881138  803242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0729 21:19:32.881155  803242 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 21:19:32.984167  803242 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 21:19:32.984220  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetMachineName
	I0729 21:19:32.984547  803242 buildroot.go:166] provisioning hostname "no-preload-497952"
	I0729 21:19:32.984580  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetMachineName
	I0729 21:19:32.984806  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHHostname
	I0729 21:19:32.987845  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:32.988327  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:32.988355  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:32.988533  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHPort
	I0729 21:19:32.988726  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHKeyPath
	I0729 21:19:32.988898  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHKeyPath
	I0729 21:19:32.989050  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHUsername
	I0729 21:19:32.989218  803242 main.go:141] libmachine: Using SSH client type: native
	I0729 21:19:32.989387  803242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0729 21:19:32.989399  803242 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-497952 && echo "no-preload-497952" | sudo tee /etc/hostname
	I0729 21:19:33.106560  803242 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-497952
	
	I0729 21:19:33.106594  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHHostname
	I0729 21:19:33.109580  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:33.109929  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:33.109959  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:33.110165  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHPort
	I0729 21:19:33.110389  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHKeyPath
	I0729 21:19:33.110547  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHKeyPath
	I0729 21:19:33.110840  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHUsername
	I0729 21:19:33.110997  803242 main.go:141] libmachine: Using SSH client type: native
	I0729 21:19:33.111159  803242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0729 21:19:33.111175  803242 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-497952' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-497952/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-497952' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 21:19:33.224132  803242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 21:19:33.224166  803242 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 21:19:33.224191  803242 buildroot.go:174] setting up certificates
	I0729 21:19:33.224207  803242 provision.go:84] configureAuth start
	I0729 21:19:33.224218  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetMachineName
	I0729 21:19:33.224500  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetIP
	I0729 21:19:33.227346  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:33.227788  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:33.227811  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:33.228001  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHHostname
	I0729 21:19:33.230398  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:33.230706  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:33.230738  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:33.230897  803242 provision.go:143] copyHostCerts
	I0729 21:19:33.230988  803242 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem, removing ...
	I0729 21:19:33.231012  803242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 21:19:33.231105  803242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 21:19:33.231238  803242 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem, removing ...
	I0729 21:19:33.231252  803242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 21:19:33.231323  803242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 21:19:33.231435  803242 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem, removing ...
	I0729 21:19:33.231448  803242 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 21:19:33.231491  803242 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 21:19:33.231557  803242 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.no-preload-497952 san=[127.0.0.1 192.168.39.177 localhost minikube no-preload-497952]
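	(For reference, a server certificate carrying the same SANs could be produced by hand with openssl. This is a hedged sketch, not the code path minikube uses — minikube generates the certificate in Go — and the CSR/output file names are hypothetical; the CA pair and SAN list are taken from the log line above.)
	  # hypothetical file names; sign a CSR with the CA pair referenced above, pinning the logged SANs
	  openssl req -newkey rsa:2048 -nodes -keyout server-key.pem \
	    -subj "/O=jenkins.no-preload-497952" -out server.csr
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.177,DNS:localhost,DNS:minikube,DNS:no-preload-497952') \
	    -out server.pem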
	I0729 21:19:33.720315  803242 provision.go:177] copyRemoteCerts
	I0729 21:19:33.720394  803242 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 21:19:33.720424  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHHostname
	I0729 21:19:33.723500  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:33.723886  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:33.723910  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:33.724130  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHPort
	I0729 21:19:33.724341  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHKeyPath
	I0729 21:19:33.724536  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHUsername
	I0729 21:19:33.724678  803242 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/no-preload-497952/id_rsa Username:docker}
	I0729 21:19:33.805459  803242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 21:19:33.827027  803242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 21:19:33.847673  803242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 21:19:33.869167  803242 provision.go:87] duration metric: took 644.945751ms to configureAuth
	I0729 21:19:33.869194  803242 buildroot.go:189] setting minikube options for container-runtime
	I0729 21:19:33.869395  803242 config.go:182] Loaded profile config "no-preload-497952": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 21:19:33.869489  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHHostname
	I0729 21:19:33.872350  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:33.872730  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:33.872754  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:33.872933  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHPort
	I0729 21:19:33.873166  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHKeyPath
	I0729 21:19:33.873299  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHKeyPath
	I0729 21:19:33.873406  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHUsername
	I0729 21:19:33.873556  803242 main.go:141] libmachine: Using SSH client type: native
	I0729 21:19:33.873756  803242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0729 21:19:33.873772  803242 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 21:19:34.133499  803242 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
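	(The %!s(MISSING) and %!N(MISSING) tokens in the logged commands here and below are logging artifacts: Go's fmt package renders a format verb with no matching argument as %!verb(MISSING), so shell snippets that legitimately contain %s or %N come out mangled in the log. The command actually executed on the guest keeps the real verbs — for example the clock probe further down is `date +%s.%N`, which is what produces the seconds.nanoseconds value 1722287974.335114973.)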
	I0729 21:19:34.133532  803242 machine.go:97] duration metric: took 1.256356574s to provisionDockerMachine
	I0729 21:19:34.133546  803242 start.go:293] postStartSetup for "no-preload-497952" (driver="kvm2")
	I0729 21:19:34.133564  803242 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 21:19:34.133584  803242 main.go:141] libmachine: (no-preload-497952) Calling .DriverName
	I0729 21:19:34.133917  803242 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 21:19:34.133952  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHHostname
	I0729 21:19:34.137223  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:34.137722  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:34.137755  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:34.137862  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHPort
	I0729 21:19:34.138064  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHKeyPath
	I0729 21:19:34.138244  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHUsername
	I0729 21:19:34.138413  803242 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/no-preload-497952/id_rsa Username:docker}
	I0729 21:19:34.218366  803242 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 21:19:34.222183  803242 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 21:19:34.222206  803242 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 21:19:34.222281  803242 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 21:19:34.222363  803242 filesync.go:149] local asset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> 7409622.pem in /etc/ssl/certs
	I0729 21:19:34.222448  803242 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 21:19:34.230988  803242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 21:19:34.251767  803242 start.go:296] duration metric: took 118.20426ms for postStartSetup
	I0729 21:19:34.251813  803242 fix.go:56] duration metric: took 21.067225256s for fixHost
	I0729 21:19:34.251834  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHHostname
	I0729 21:19:34.254533  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:34.255002  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:34.255048  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:34.255240  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHPort
	I0729 21:19:34.255461  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHKeyPath
	I0729 21:19:34.255652  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHKeyPath
	I0729 21:19:34.255779  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHUsername
	I0729 21:19:34.255920  803242 main.go:141] libmachine: Using SSH client type: native
	I0729 21:19:34.256190  803242 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I0729 21:19:34.256204  803242 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 21:19:34.356249  803242 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722287974.335114973
	
	I0729 21:19:34.356277  803242 fix.go:216] guest clock: 1722287974.335114973
	I0729 21:19:34.356287  803242 fix.go:229] Guest: 2024-07-29 21:19:34.335114973 +0000 UTC Remote: 2024-07-29 21:19:34.251816967 +0000 UTC m=+288.928080719 (delta=83.298006ms)
	I0729 21:19:34.356331  803242 fix.go:200] guest clock delta is within tolerance: 83.298006ms
	I0729 21:19:34.356340  803242 start.go:83] releasing machines lock for "no-preload-497952", held for 21.171806713s
	I0729 21:19:34.356374  803242 main.go:141] libmachine: (no-preload-497952) Calling .DriverName
	I0729 21:19:34.356697  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetIP
	I0729 21:19:34.359757  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:34.360175  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:34.360205  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:34.360373  803242 main.go:141] libmachine: (no-preload-497952) Calling .DriverName
	I0729 21:19:34.360871  803242 main.go:141] libmachine: (no-preload-497952) Calling .DriverName
	I0729 21:19:34.361110  803242 main.go:141] libmachine: (no-preload-497952) Calling .DriverName
	I0729 21:19:34.361215  803242 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 21:19:34.361281  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHHostname
	I0729 21:19:34.361408  803242 ssh_runner.go:195] Run: cat /version.json
	I0729 21:19:34.361437  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHHostname
	I0729 21:19:34.364133  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:34.364161  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:34.364580  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:34.364616  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:34.364641  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:34.364656  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:34.364768  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHPort
	I0729 21:19:34.364870  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHPort
	I0729 21:19:34.364975  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHKeyPath
	I0729 21:19:34.365060  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHKeyPath
	I0729 21:19:34.365136  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHUsername
	I0729 21:19:34.365187  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetSSHUsername
	I0729 21:19:34.365305  803242 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/no-preload-497952/id_rsa Username:docker}
	I0729 21:19:34.365351  803242 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/no-preload-497952/id_rsa Username:docker}
	I0729 21:19:34.440557  803242 ssh_runner.go:195] Run: systemctl --version
	I0729 21:19:34.470120  803242 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 21:19:34.617915  803242 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 21:19:34.623516  803242 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 21:19:34.623579  803242 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 21:19:34.642894  803242 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 21:19:34.642920  803242 start.go:495] detecting cgroup driver to use...
	I0729 21:19:34.642994  803242 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 21:19:34.661406  803242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 21:19:34.676498  803242 docker.go:216] disabling cri-docker service (if available) ...
	I0729 21:19:34.676570  803242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 21:19:34.689177  803242 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 21:19:34.702067  803242 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 21:19:34.807366  803242 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 21:19:34.943440  803242 docker.go:232] disabling docker service ...
	I0729 21:19:34.943510  803242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 21:19:34.957675  803242 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 21:19:34.970476  803242 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 21:19:35.120753  803242 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 21:19:35.252934  803242 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 21:19:35.265833  803242 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 21:19:35.282618  803242 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0729 21:19:35.282695  803242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 21:19:35.292067  803242 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 21:19:35.292137  803242 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 21:19:35.301342  803242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 21:19:35.310560  803242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 21:19:35.319665  803242 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 21:19:35.329190  803242 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 21:19:35.338589  803242 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 21:19:35.354260  803242 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 21:19:35.365264  803242 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 21:19:35.373973  803242 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 21:19:35.374022  803242 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 21:19:35.386530  803242 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 21:19:35.395376  803242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 21:19:35.535734  803242 ssh_runner.go:195] Run: sudo systemctl restart crio
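	(Taken together, the sed edits above pin CRI-O's pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl before the restart. A quick way to confirm the result on the guest — a hedged sketch; the exact layout of 02-crio.conf may differ:)
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # expected, based on the commands logged above:
	  #   pause_image = "registry.k8s.io/pause:3.10"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",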
	I0729 21:19:35.669293  803242 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 21:19:35.669389  803242 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 21:19:35.673806  803242 start.go:563] Will wait 60s for crictl version
	I0729 21:19:35.673874  803242 ssh_runner.go:195] Run: which crictl
	I0729 21:19:35.677840  803242 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 21:19:35.720252  803242 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 21:19:35.720396  803242 ssh_runner.go:195] Run: crio --version
	I0729 21:19:35.746345  803242 ssh_runner.go:195] Run: crio --version
	I0729 21:19:35.773348  803242 out.go:177] * Preparing Kubernetes v1.31.0-beta.0 on CRI-O 1.29.1 ...
	I0729 21:19:34.382242  803586 main.go:141] libmachine: (old-k8s-version-813126) Calling .Start
	I0729 21:19:34.382434  803586 main.go:141] libmachine: (old-k8s-version-813126) Ensuring networks are active...
	I0729 21:19:34.383236  803586 main.go:141] libmachine: (old-k8s-version-813126) Ensuring network default is active
	I0729 21:19:34.383697  803586 main.go:141] libmachine: (old-k8s-version-813126) Ensuring network mk-old-k8s-version-813126 is active
	I0729 21:19:34.384131  803586 main.go:141] libmachine: (old-k8s-version-813126) Getting domain xml...
	I0729 21:19:34.385199  803586 main.go:141] libmachine: (old-k8s-version-813126) Creating domain...
	I0729 21:19:35.676488  803586 main.go:141] libmachine: (old-k8s-version-813126) Waiting to get IP...
	I0729 21:19:35.677538  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | domain old-k8s-version-813126 has defined MAC address 52:54:00:bb:ef:8d in network mk-old-k8s-version-813126
	I0729 21:19:35.678141  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | unable to find current IP address of domain old-k8s-version-813126 in network mk-old-k8s-version-813126
	I0729 21:19:35.678311  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | I0729 21:19:35.678115  804921 retry.go:31] will retry after 279.410403ms: waiting for machine to come up
	I0729 21:19:35.959615  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | domain old-k8s-version-813126 has defined MAC address 52:54:00:bb:ef:8d in network mk-old-k8s-version-813126
	I0729 21:19:35.960139  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | unable to find current IP address of domain old-k8s-version-813126 in network mk-old-k8s-version-813126
	I0729 21:19:35.960164  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | I0729 21:19:35.960110  804921 retry.go:31] will retry after 268.467593ms: waiting for machine to come up
	I0729 21:19:36.230860  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | domain old-k8s-version-813126 has defined MAC address 52:54:00:bb:ef:8d in network mk-old-k8s-version-813126
	I0729 21:19:36.231651  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | unable to find current IP address of domain old-k8s-version-813126 in network mk-old-k8s-version-813126
	I0729 21:19:36.231681  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | I0729 21:19:36.231569  804921 retry.go:31] will retry after 450.28279ms: waiting for machine to come up
	I0729 21:19:36.683327  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | domain old-k8s-version-813126 has defined MAC address 52:54:00:bb:ef:8d in network mk-old-k8s-version-813126
	I0729 21:19:36.683836  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | unable to find current IP address of domain old-k8s-version-813126 in network mk-old-k8s-version-813126
	I0729 21:19:36.683870  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | I0729 21:19:36.683780  804921 retry.go:31] will retry after 411.980071ms: waiting for machine to come up
	I0729 21:19:37.097617  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | domain old-k8s-version-813126 has defined MAC address 52:54:00:bb:ef:8d in network mk-old-k8s-version-813126
	I0729 21:19:37.098211  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | unable to find current IP address of domain old-k8s-version-813126 in network mk-old-k8s-version-813126
	I0729 21:19:37.098243  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | I0729 21:19:37.098153  804921 retry.go:31] will retry after 459.976555ms: waiting for machine to come up
	I0729 21:19:37.560205  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | domain old-k8s-version-813126 has defined MAC address 52:54:00:bb:ef:8d in network mk-old-k8s-version-813126
	I0729 21:19:37.560751  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | unable to find current IP address of domain old-k8s-version-813126 in network mk-old-k8s-version-813126
	I0729 21:19:37.560783  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | I0729 21:19:37.560697  804921 retry.go:31] will retry after 817.783628ms: waiting for machine to come up
	I0729 21:19:35.774713  803242 main.go:141] libmachine: (no-preload-497952) Calling .GetIP
	I0729 21:19:35.777931  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:35.778287  803242 main.go:141] libmachine: (no-preload-497952) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ef:27:ad", ip: ""} in network mk-no-preload-497952: {Iface:virbr1 ExpiryTime:2024-07-29 22:10:30 +0000 UTC Type:0 Mac:52:54:00:ef:27:ad Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:no-preload-497952 Clientid:01:52:54:00:ef:27:ad}
	I0729 21:19:35.778323  803242 main.go:141] libmachine: (no-preload-497952) DBG | domain no-preload-497952 has defined IP address 192.168.39.177 and MAC address 52:54:00:ef:27:ad in network mk-no-preload-497952
	I0729 21:19:35.778535  803242 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 21:19:35.782243  803242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 21:19:35.796750  803242 kubeadm.go:883] updating cluster {Name:no-preload-497952 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-497952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 21:19:35.796923  803242 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 21:19:35.796966  803242 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 21:19:35.830928  803242 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.0-beta.0". assuming images are not preloaded.
	I0729 21:19:35.830963  803242 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.0-beta.0 registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 registry.k8s.io/kube-scheduler:v1.31.0-beta.0 registry.k8s.io/kube-proxy:v1.31.0-beta.0 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.14-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 21:19:35.831035  803242 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 21:19:35.831046  803242 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 21:19:35.831073  803242 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 21:19:35.831042  803242 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0729 21:19:35.831096  803242 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 21:19:35.831103  803242 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 21:19:35.831057  803242 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0729 21:19:35.831270  803242 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 21:19:35.832921  803242 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 21:19:35.832974  803242 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 21:19:35.833022  803242 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 21:19:35.833029  803242 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 21:19:35.832921  803242 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0729 21:19:35.833118  803242 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 21:19:35.833023  803242 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0729 21:19:35.833056  803242 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 21:19:36.073112  803242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 21:19:36.096850  803242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I0729 21:19:36.097377  803242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 21:19:36.099168  803242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.14-0
	I0729 21:19:36.112128  803242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 21:19:36.123153  803242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0729 21:19:36.140572  803242 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.31.0-beta.0" does not exist at hash "c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899" in container runtime
	I0729 21:19:36.140619  803242 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 21:19:36.140671  803242 ssh_runner.go:195] Run: which crictl
	I0729 21:19:36.201728  803242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 21:19:36.324651  803242 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" does not exist at hash "f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938" in container runtime
	I0729 21:19:36.324709  803242 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 21:19:36.324771  803242 cache_images.go:116] "registry.k8s.io/etcd:3.5.14-0" needs transfer: "registry.k8s.io/etcd:3.5.14-0" does not exist at hash "cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa" in container runtime
	I0729 21:19:36.324809  803242 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.14-0
	I0729 21:19:36.324843  803242 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" does not exist at hash "63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5" in container runtime
	I0729 21:19:36.324780  803242 ssh_runner.go:195] Run: which crictl
	I0729 21:19:36.324874  803242 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 21:19:36.324876  803242 ssh_runner.go:195] Run: which crictl
	I0729 21:19:36.324918  803242 ssh_runner.go:195] Run: which crictl
	I0729 21:19:36.324947  803242 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0729 21:19:36.324981  803242 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0729 21:19:36.324996  803242 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0729 21:19:36.325011  803242 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" does not exist at hash "d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b" in container runtime
	I0729 21:19:36.325046  803242 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 21:19:36.325022  803242 ssh_runner.go:195] Run: which crictl
	I0729 21:19:36.325085  803242 ssh_runner.go:195] Run: which crictl
	I0729 21:19:36.329216  803242 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0729 21:19:36.337238  803242 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0729 21:19:36.337412  803242 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0729 21:19:36.337495  803242 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.14-0
	I0729 21:19:36.437909  803242 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0729 21:19:36.437961  803242 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0729 21:19:36.437970  803242 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0729 21:19:36.437983  803242 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0729 21:19:36.438033  803242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 21:19:36.438059  803242 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0729 21:19:36.438064  803242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 21:19:36.438041  803242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 21:19:36.438118  803242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0729 21:19:36.438123  803242 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0
	I0729 21:19:36.438182  803242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.14-0
	I0729 21:19:36.446982  803242 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0 (exists)
	I0729 21:19:36.447007  803242 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 21:19:36.447058  803242 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0
	I0729 21:19:36.485463  803242 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0 (exists)
	I0729 21:19:36.485524  803242 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0 (exists)
	I0729 21:19:36.485539  803242 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.14-0 (exists)
	I0729 21:19:36.485563  803242 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0729 21:19:36.485627  803242 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0729 21:19:36.485749  803242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 21:19:37.115658  803242 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 21:19:38.527279  803242 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.0-beta.0: (2.080184185s)
	I0729 21:19:38.527342  803242 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 from cache
	I0729 21:19:38.527362  803242 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (2.041583011s)
	I0729 21:19:38.527403  803242 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.411710277s)
	I0729 21:19:38.527444  803242 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0729 21:19:38.527406  803242 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0 (exists)
	I0729 21:19:38.527371  803242 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 21:19:38.527475  803242 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 21:19:38.527534  803242 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0
	I0729 21:19:38.527570  803242 ssh_runner.go:195] Run: which crictl
	I0729 21:19:38.379830  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | domain old-k8s-version-813126 has defined MAC address 52:54:00:bb:ef:8d in network mk-old-k8s-version-813126
	I0729 21:19:38.380469  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | unable to find current IP address of domain old-k8s-version-813126 in network mk-old-k8s-version-813126
	I0729 21:19:38.380499  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | I0729 21:19:38.380421  804921 retry.go:31] will retry after 965.30779ms: waiting for machine to come up
	I0729 21:19:39.347197  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | domain old-k8s-version-813126 has defined MAC address 52:54:00:bb:ef:8d in network mk-old-k8s-version-813126
	I0729 21:19:39.347717  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | unable to find current IP address of domain old-k8s-version-813126 in network mk-old-k8s-version-813126
	I0729 21:19:39.347750  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | I0729 21:19:39.347663  804921 retry.go:31] will retry after 1.071692164s: waiting for machine to come up
	I0729 21:19:40.421245  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | domain old-k8s-version-813126 has defined MAC address 52:54:00:bb:ef:8d in network mk-old-k8s-version-813126
	I0729 21:19:40.421742  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | unable to find current IP address of domain old-k8s-version-813126 in network mk-old-k8s-version-813126
	I0729 21:19:40.421776  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | I0729 21:19:40.421685  804921 retry.go:31] will retry after 1.327079632s: waiting for machine to come up
	I0729 21:19:41.751272  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | domain old-k8s-version-813126 has defined MAC address 52:54:00:bb:ef:8d in network mk-old-k8s-version-813126
	I0729 21:19:41.751789  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | unable to find current IP address of domain old-k8s-version-813126 in network mk-old-k8s-version-813126
	I0729 21:19:41.751827  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | I0729 21:19:41.751744  804921 retry.go:31] will retry after 2.065937991s: waiting for machine to come up
	I0729 21:19:40.403328  803242 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.0-beta.0: (1.875766323s)
	I0729 21:19:40.403367  803242 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 from cache
	I0729 21:19:40.403400  803242 ssh_runner.go:235] Completed: which crictl: (1.875806075s)
	I0729 21:19:40.403418  803242 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 21:19:40.403480  803242 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 21:19:40.403482  803242 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0
	I0729 21:19:42.373253  803242 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.969738494s)
	I0729 21:19:42.373303  803242 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.0-beta.0: (1.969731914s)
	I0729 21:19:42.373328  803242 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 from cache
	I0729 21:19:42.373338  803242 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0729 21:19:42.373356  803242 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.14-0
	I0729 21:19:42.373425  803242 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0
	I0729 21:19:42.373439  803242 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0729 21:19:43.819965  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | domain old-k8s-version-813126 has defined MAC address 52:54:00:bb:ef:8d in network mk-old-k8s-version-813126
	I0729 21:19:43.820566  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | unable to find current IP address of domain old-k8s-version-813126 in network mk-old-k8s-version-813126
	I0729 21:19:43.820598  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | I0729 21:19:43.820488  804921 retry.go:31] will retry after 1.980353388s: waiting for machine to come up
	I0729 21:19:45.802594  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | domain old-k8s-version-813126 has defined MAC address 52:54:00:bb:ef:8d in network mk-old-k8s-version-813126
	I0729 21:19:45.802965  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | unable to find current IP address of domain old-k8s-version-813126 in network mk-old-k8s-version-813126
	I0729 21:19:45.802987  803586 main.go:141] libmachine: (old-k8s-version-813126) DBG | I0729 21:19:45.802933  804921 retry.go:31] will retry after 3.035931504s: waiting for machine to come up
	I0729 21:19:45.663748  803242 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (3.290284576s)
	I0729 21:19:45.663788  803242 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0729 21:19:45.663806  803242 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.14-0: (3.290353512s)
	I0729 21:19:45.663827  803242 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.14-0 from cache
	I0729 21:19:45.663861  803242 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0729 21:19:45.663914  803242 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0729 21:19:47.428346  803242 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (1.764398579s)
	I0729 21:19:47.428382  803242 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0729 21:19:47.428422  803242 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 21:19:47.428490  803242 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0
	I0729 21:19:49.289552  803242 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.0-beta.0: (1.861019775s)
	I0729 21:19:49.289593  803242 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 from cache
	I0729 21:19:49.289623  803242 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0729 21:19:49.289678  803242 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0729 21:19:49.932166  803242 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0729 21:19:49.932222  803242 cache_images.go:123] Successfully loaded all cached images
	I0729 21:19:49.932233  803242 cache_images.go:92] duration metric: took 14.101251138s to LoadCachedImages
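	(Each tarball above is loaded into CRI-O's image store with `sudo podman load -i <tarball>`, as logged. Once loading completes, the control-plane images can be confirmed on the guest with a command along these lines — a suggestion for manual verification, not a step the test performs:)
	  sudo crictl images | grep v1.31.0-beta.0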
	I0729 21:19:49.932251  803242 kubeadm.go:934] updating node { 192.168.39.177 8443 v1.31.0-beta.0 crio true true} ...
	I0729 21:19:49.932419  803242 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-497952 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-497952 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 21:19:49.932491  803242 ssh_runner.go:195] Run: crio config
	I0729 21:19:49.991697  803242 cni.go:84] Creating CNI manager for ""
	I0729 21:19:49.991725  803242 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 21:19:49.991736  803242 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 21:19:49.991758  803242 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.177 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-497952 NodeName:no-preload-497952 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 21:19:49.991941  803242 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-497952"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.177
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.177"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
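	(The rendered configuration above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. If it ever needs a manual sanity check on the guest, something like the following exercises it without touching the cluster — a hedged example, not a step the test runs:)
	  sudo /var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run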
	I0729 21:19:49.992042  803242 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0729 21:19:50.006273  803242 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 21:19:50.006379  803242 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 21:19:50.016374  803242 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0729 21:19:50.033133  803242 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0729 21:19:50.049762  803242 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
	I0729 21:19:50.068085  803242 ssh_runner.go:195] Run: grep 192.168.39.177	control-plane.minikube.internal$ /etc/hosts
	I0729 21:19:50.072004  803242 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 21:19:50.083304  803242 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 21:19:50.209402  803242 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 21:19:50.225574  803242 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/no-preload-497952 for IP: 192.168.39.177
	I0729 21:19:50.225596  803242 certs.go:194] generating shared ca certs ...
	I0729 21:19:50.225618  803242 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 21:19:50.225812  803242 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 21:19:50.225880  803242 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 21:19:50.225895  803242 certs.go:256] generating profile certs ...
	I0729 21:19:50.226003  803242 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/no-preload-497952/client.key
	I0729 21:19:50.226072  803242 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/no-preload-497952/apiserver.key.7d214cb7
	I0729 21:19:50.226125  803242 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/no-preload-497952/proxy-client.key
	I0729 21:19:50.226330  803242 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem (1338 bytes)
	W0729 21:19:50.226380  803242 certs.go:480] ignoring /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962_empty.pem, impossibly tiny 0 bytes
	I0729 21:19:50.226395  803242 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 21:19:50.226428  803242 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 21:19:50.226462  803242 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 21:19:50.226503  803242 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 21:19:50.226567  803242 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 21:19:50.227356  803242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 21:19:50.257825  803242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 21:19:50.287450  803242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 21:19:50.316341  803242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 21:19:50.343266  803242 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/no-preload-497952/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 21:19:50.679134  788724 kubeadm.go:310] error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	I0729 21:19:50.679219  788724 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 21:19:50.681302  788724 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 21:19:50.681369  788724 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 21:19:50.681443  788724 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 21:19:50.681597  788724 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 21:19:50.681718  788724 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 21:19:50.681802  788724 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 21:19:50.683827  788724 out.go:204]   - Generating certificates and keys ...
	I0729 21:19:50.683930  788724 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 21:19:50.684026  788724 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 21:19:50.684144  788724 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 21:19:50.684224  788724 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 21:19:50.684292  788724 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 21:19:50.684337  788724 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 21:19:50.684386  788724 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 21:19:50.684435  788724 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 21:19:50.684509  788724 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 21:19:50.684572  788724 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 21:19:50.684600  788724 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 21:19:50.684651  788724 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 21:19:50.684692  788724 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 21:19:50.684739  788724 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 21:19:50.684787  788724 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 21:19:50.684839  788724 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 21:19:50.684882  788724 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 21:19:50.684946  788724 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 21:19:50.684997  788724 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 21:19:50.686474  788724 out.go:204]   - Booting up control plane ...
	I0729 21:19:50.686590  788724 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 21:19:50.686651  788724 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 21:19:50.686707  788724 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 21:19:50.686785  788724 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 21:19:50.686846  788724 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 21:19:50.686875  788724 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 21:19:50.687026  788724 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 21:19:50.687079  788724 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 21:19:50.687139  788724 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.480639ms
	I0729 21:19:50.687231  788724 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 21:19:50.687312  788724 kubeadm.go:310] [api-check] The API server is not healthy after 4m0.006421971s
	I0729 21:19:50.687317  788724 kubeadm.go:310] 
	I0729 21:19:50.687361  788724 kubeadm.go:310] Unfortunately, an error has occurred:
	I0729 21:19:50.687400  788724 kubeadm.go:310] 	context deadline exceeded
	I0729 21:19:50.687405  788724 kubeadm.go:310] 
	I0729 21:19:50.687446  788724 kubeadm.go:310] This error is likely caused by:
	I0729 21:19:50.687489  788724 kubeadm.go:310] 	- The kubelet is not running
	I0729 21:19:50.687612  788724 kubeadm.go:310] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 21:19:50.687622  788724 kubeadm.go:310] 
	I0729 21:19:50.687759  788724 kubeadm.go:310] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 21:19:50.687785  788724 kubeadm.go:310] 	- 'systemctl status kubelet'
	I0729 21:19:50.687822  788724 kubeadm.go:310] 	- 'journalctl -xeu kubelet'
	I0729 21:19:50.687827  788724 kubeadm.go:310] 
	I0729 21:19:50.687926  788724 kubeadm.go:310] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 21:19:50.688011  788724 kubeadm.go:310] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 21:19:50.688146  788724 kubeadm.go:310] Here is one example how you may list all running Kubernetes containers by using crictl:
	I0729 21:19:50.688275  788724 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 21:19:50.688389  788724 kubeadm.go:310] 	Once you have found the failing container, you can inspect its logs with:
	I0729 21:19:50.688562  788724 kubeadm.go:310] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
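	A minimal shell sketch of the troubleshooting flow the message above describes, assuming shell access to the node (for example via 'minikube ssh -p <profile>'); the commands and the CRI-O socket path are the ones quoted in the message, and CONTAINERID is a placeholder:
	
	  # check the kubelet service and its recent logs
	  sudo systemctl status kubelet
	  sudo journalctl -xeu kubelet
	  # list the Kubernetes containers CRI-O knows about, then inspect the failing one
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID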
	I0729 21:19:50.688571  788724 kubeadm.go:394] duration metric: took 12m27.739789556s to StartCluster
	I0729 21:19:50.688612  788724 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 21:19:50.688662  788724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 21:19:50.734567  788724 cri.go:89] found id: "7e3932e2f27408ae1fa4ecc44f82c9414fd294ba8003a1cfb0e56df1c0f61b19"
	I0729 21:19:50.734579  788724 cri.go:89] found id: ""
	I0729 21:19:50.734588  788724 logs.go:276] 1 containers: [7e3932e2f27408ae1fa4ecc44f82c9414fd294ba8003a1cfb0e56df1c0f61b19]
	I0729 21:19:50.734645  788724 ssh_runner.go:195] Run: which crictl
	I0729 21:19:50.739724  788724 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 21:19:50.739771  788724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 21:19:50.781531  788724 cri.go:89] found id: ""
	I0729 21:19:50.781556  788724 logs.go:276] 0 containers: []
	W0729 21:19:50.781563  788724 logs.go:278] No container was found matching "etcd"
	I0729 21:19:50.781569  788724 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 21:19:50.781626  788724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 21:19:50.819915  788724 cri.go:89] found id: ""
	I0729 21:19:50.819931  788724 logs.go:276] 0 containers: []
	W0729 21:19:50.819939  788724 logs.go:278] No container was found matching "coredns"
	I0729 21:19:50.819945  788724 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 21:19:50.820002  788724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 21:19:50.857003  788724 cri.go:89] found id: ""
	I0729 21:19:50.857019  788724 logs.go:276] 0 containers: []
	W0729 21:19:50.857027  788724 logs.go:278] No container was found matching "kube-scheduler"
	I0729 21:19:50.857034  788724 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 21:19:50.857092  788724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 21:19:50.892663  788724 cri.go:89] found id: ""
	I0729 21:19:50.892679  788724 logs.go:276] 0 containers: []
	W0729 21:19:50.892686  788724 logs.go:278] No container was found matching "kube-proxy"
	I0729 21:19:50.892695  788724 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 21:19:50.892750  788724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 21:19:50.927462  788724 cri.go:89] found id: "35dec04cdc04413fcf2e6f69983db52a8de37404b19f195e5d3643de8162723c"
	I0729 21:19:50.927477  788724 cri.go:89] found id: ""
	I0729 21:19:50.927486  788724 logs.go:276] 1 containers: [35dec04cdc04413fcf2e6f69983db52a8de37404b19f195e5d3643de8162723c]
	I0729 21:19:50.927535  788724 ssh_runner.go:195] Run: which crictl
	I0729 21:19:50.931351  788724 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 21:19:50.931400  788724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 21:19:50.974615  788724 cri.go:89] found id: ""
	I0729 21:19:50.974635  788724 logs.go:276] 0 containers: []
	W0729 21:19:50.974644  788724 logs.go:278] No container was found matching "kindnet"
	I0729 21:19:50.974650  788724 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0729 21:19:50.974721  788724 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 21:19:51.009923  788724 cri.go:89] found id: ""
	I0729 21:19:51.009943  788724 logs.go:276] 0 containers: []
	W0729 21:19:51.009952  788724 logs.go:278] No container was found matching "storage-provisioner"
	I0729 21:19:51.009966  788724 logs.go:123] Gathering logs for kube-controller-manager [35dec04cdc04413fcf2e6f69983db52a8de37404b19f195e5d3643de8162723c] ...
	I0729 21:19:51.009984  788724 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35dec04cdc04413fcf2e6f69983db52a8de37404b19f195e5d3643de8162723c"
	I0729 21:19:51.043324  788724 logs.go:123] Gathering logs for CRI-O ...
	I0729 21:19:51.043352  788724 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 21:19:51.258900  788724 logs.go:123] Gathering logs for container status ...
	I0729 21:19:51.258923  788724 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 21:19:51.299680  788724 logs.go:123] Gathering logs for kubelet ...
	I0729 21:19:51.299701  788724 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0729 21:19:51.437887  788724 logs.go:123] Gathering logs for dmesg ...
	I0729 21:19:51.437910  788724 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 21:19:51.451920  788724 logs.go:123] Gathering logs for describe nodes ...
	I0729 21:19:51.451940  788724 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 21:19:51.524849  788724 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 21:19:51.524869  788724 logs.go:123] Gathering logs for kube-apiserver [7e3932e2f27408ae1fa4ecc44f82c9414fd294ba8003a1cfb0e56df1c0f61b19] ...
	I0729 21:19:51.524884  788724 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e3932e2f27408ae1fa4ecc44f82c9414fd294ba8003a1cfb0e56df1c0f61b19"
	W0729 21:19:51.559080  788724 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.480639ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.006421971s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 21:19:51.559109  788724 out.go:239] * 
	W0729 21:19:51.559181  788724 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.480639ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.006421971s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 21:19:51.559200  788724 out.go:239] * 
	W0729 21:19:51.560026  788724 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 21:19:51.562756  788724 out.go:177] 
	W0729 21:19:51.563817  788724 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.30.3
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 501.480639ms
	[api-check] Waiting for a healthy API server. This can take up to 4m0s
	[api-check] The API server is not healthy after 4m0.006421971s
	
	Unfortunately, an error has occurred:
		context deadline exceeded
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: could not initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 21:19:51.563851  788724 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 21:19:51.563866  788724 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
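	The suggested workaround boils down to re-running the start command with one extra kubelet setting; a minimal sketch, assuming the same profile and flags as the original invocation (only the --extra-config value quoted above is new, and <profile> is a placeholder):
	
	  minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd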
	I0729 21:19:51.565031  788724 out.go:177] 
	
	
	==> CRI-O <==
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.151560525Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b5dbbb8-c22a-4943-9e50-7c5316967b23 name=/runtime.v1.RuntimeService/Version
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.153775452Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b42f3238-e5bc-43e0-9026-c857232afc20 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.154390017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722287992154307682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b42f3238-e5bc-43e0-9026-c857232afc20 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.157756671Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=26d05d7f-7de5-4e60-942c-cd560fd05b81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.157891946Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=26d05d7f-7de5-4e60-942c-cd560fd05b81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.157982794Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:35dec04cdc04413fcf2e6f69983db52a8de37404b19f195e5d3643de8162723c,PodSandboxId:b3b3ecb7ae6f8737a42c1f2216ee31a406abcfd7c79975e6c1e358431616dcf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722287926469566713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-461577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fea26aab68d7d24910fdd5d02cc161,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.contain
er.restartCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3932e2f27408ae1fa4ecc44f82c9414fd294ba8003a1cfb0e56df1c0f61b19,PodSandboxId:aa6ddbf62f0e75e47e84ddda1a31c43f6df36cd7bcb1c97736c63bfab671d023,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:16,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722287918472576987,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-461577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18d2c27f8339fe54beac7f489e547c4,},Annotations:map[string]string{io.kubernetes.container.hash: e2d44257,io.kubernetes.container.rest
artCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=26d05d7f-7de5-4e60-942c-cd560fd05b81 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.194195729Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c784d390-cefa-423f-9ed6-f729ed4081dc name=/runtime.v1.RuntimeService/Version
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.194283582Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c784d390-cefa-423f-9ed6-f729ed4081dc name=/runtime.v1.RuntimeService/Version
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.195140748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=73ecfad1-8ea3-445e-8d6f-76ad1c29deba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.195484350Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722287992195463671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=73ecfad1-8ea3-445e-8d6f-76ad1c29deba name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.195867478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2e54aa7d-53aa-4df9-9f60-c96381f2209f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.195932993Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2e54aa7d-53aa-4df9-9f60-c96381f2209f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.196005768Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:35dec04cdc04413fcf2e6f69983db52a8de37404b19f195e5d3643de8162723c,PodSandboxId:b3b3ecb7ae6f8737a42c1f2216ee31a406abcfd7c79975e6c1e358431616dcf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722287926469566713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-461577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fea26aab68d7d24910fdd5d02cc161,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.contain
er.restartCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3932e2f27408ae1fa4ecc44f82c9414fd294ba8003a1cfb0e56df1c0f61b19,PodSandboxId:aa6ddbf62f0e75e47e84ddda1a31c43f6df36cd7bcb1c97736c63bfab671d023,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:16,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722287918472576987,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-461577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18d2c27f8339fe54beac7f489e547c4,},Annotations:map[string]string{io.kubernetes.container.hash: e2d44257,io.kubernetes.container.rest
artCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2e54aa7d-53aa-4df9-9f60-c96381f2209f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.218140302Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=51c38cd0-facf-46e4-8ed4-fc98c9e10ec5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.218295368Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b3b3ecb7ae6f8737a42c1f2216ee31a406abcfd7c79975e6c1e358431616dcf5,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-cert-expiration-461577,Uid:51fea26aab68d7d24910fdd5d02cc161,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722287750892625101,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-461577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fea26aab68d7d24910fdd5d02cc161,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 51fea26aab68d7d24910fdd5d02cc161,kubernetes.io/config.seen: 2024-07-29T21:15:50.410945590Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2820bdf2ff4fe78f9f483ed2629df321a1e49dc8a42c6efa9510231227323b0c,Metadat
a:&PodSandboxMetadata{Name:kube-scheduler-cert-expiration-461577,Uid:4304ac65ea070120801bc50163739b88,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722287750892492242,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-cert-expiration-461577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4304ac65ea070120801bc50163739b88,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4304ac65ea070120801bc50163739b88,kubernetes.io/config.seen: 2024-07-29T21:15:50.410946551Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:aa6ddbf62f0e75e47e84ddda1a31c43f6df36cd7bcb1c97736c63bfab671d023,Metadata:&PodSandboxMetadata{Name:kube-apiserver-cert-expiration-461577,Uid:a18d2c27f8339fe54beac7f489e547c4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722287750886360567,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernete
s.pod.name: kube-apiserver-cert-expiration-461577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18d2c27f8339fe54beac7f489e547c4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.140:8443,kubernetes.io/config.hash: a18d2c27f8339fe54beac7f489e547c4,kubernetes.io/config.seen: 2024-07-29T21:15:50.410944432Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dc67670ce84b07c033100bb7d3a539a1b62182ce94877e519d4f716a7c1ea951,Metadata:&PodSandboxMetadata{Name:etcd-cert-expiration-461577,Uid:6676dd0374818fa6c6aa191640b70b7d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722287750872179350,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-cert-expiration-461577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6676dd0374818fa6c6aa191640b70b7d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.ad
vertise-client-urls: https://192.168.72.140:2379,kubernetes.io/config.hash: 6676dd0374818fa6c6aa191640b70b7d,kubernetes.io/config.seen: 2024-07-29T21:15:50.410940623Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=51c38cd0-facf-46e4-8ed4-fc98c9e10ec5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.218784227Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e2952aa-975b-4630-8b05-a867497369f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.218838123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e2952aa-975b-4630-8b05-a867497369f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.218911332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:35dec04cdc04413fcf2e6f69983db52a8de37404b19f195e5d3643de8162723c,PodSandboxId:b3b3ecb7ae6f8737a42c1f2216ee31a406abcfd7c79975e6c1e358431616dcf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722287926469566713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-461577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fea26aab68d7d24910fdd5d02cc161,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.contain
er.restartCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3932e2f27408ae1fa4ecc44f82c9414fd294ba8003a1cfb0e56df1c0f61b19,PodSandboxId:aa6ddbf62f0e75e47e84ddda1a31c43f6df36cd7bcb1c97736c63bfab671d023,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:16,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722287918472576987,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-461577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18d2c27f8339fe54beac7f489e547c4,},Annotations:map[string]string{io.kubernetes.container.hash: e2d44257,io.kubernetes.container.rest
artCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e2952aa-975b-4630-8b05-a867497369f2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.226287143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09650747-11bb-408b-a4ac-888ef88c21c9 name=/runtime.v1.RuntimeService/Version
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.226342321Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09650747-11bb-408b-a4ac-888ef88c21c9 name=/runtime.v1.RuntimeService/Version
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.227219127Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b72e8747-0ab5-43f3-b05e-b693531829fd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.227566163Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722287992227545287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b72e8747-0ab5-43f3-b05e-b693531829fd name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.228139641Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6b3370f-9cbb-44e2-9297-40977e6a841f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.228188638Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6b3370f-9cbb-44e2-9297-40977e6a841f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:19:52 cert-expiration-461577 crio[2966]: time="2024-07-29 21:19:52.228257124Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:35dec04cdc04413fcf2e6f69983db52a8de37404b19f195e5d3643de8162723c,PodSandboxId:b3b3ecb7ae6f8737a42c1f2216ee31a406abcfd7c79975e6c1e358431616dcf5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:16,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722287926469566713,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-cert-expiration-461577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51fea26aab68d7d24910fdd5d02cc161,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.contain
er.restartCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e3932e2f27408ae1fa4ecc44f82c9414fd294ba8003a1cfb0e56df1c0f61b19,PodSandboxId:aa6ddbf62f0e75e47e84ddda1a31c43f6df36cd7bcb1c97736c63bfab671d023,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:16,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722287918472576987,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-cert-expiration-461577,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a18d2c27f8339fe54beac7f489e547c4,},Annotations:map[string]string{io.kubernetes.container.hash: e2d44257,io.kubernetes.container.rest
artCount: 16,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6b3370f-9cbb-44e2-9297-40977e6a841f name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	35dec04cdc044       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   About a minute ago   Exited              kube-controller-manager   16                  b3b3ecb7ae6f8       kube-controller-manager-cert-expiration-461577
	7e3932e2f2740       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   About a minute ago   Exited              kube-apiserver            16                  aa6ddbf62f0e7       kube-apiserver-cert-expiration-461577
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.198685] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.129336] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.267176] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +4.136680] systemd-fstab-generator[759]: Ignoring "noauto" option for root device
	[  +4.789573] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +0.082658] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.479143] systemd-fstab-generator[1281]: Ignoring "noauto" option for root device
	[  +0.071205] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.193377] systemd-fstab-generator[1371]: Ignoring "noauto" option for root device
	[  +6.426690] kauditd_printk_skb: 46 callbacks suppressed
	[ +17.124992] kauditd_printk_skb: 61 callbacks suppressed
	[Jul29 21:05] systemd-fstab-generator[2680]: Ignoring "noauto" option for root device
	[  +0.299274] systemd-fstab-generator[2715]: Ignoring "noauto" option for root device
	[  +0.287361] systemd-fstab-generator[2782]: Ignoring "noauto" option for root device
	[  +0.264441] systemd-fstab-generator[2799]: Ignoring "noauto" option for root device
	[  +0.444272] systemd-fstab-generator[2828]: Ignoring "noauto" option for root device
	[Jul29 21:07] systemd-fstab-generator[3073]: Ignoring "noauto" option for root device
	[  +0.129825] kauditd_printk_skb: 181 callbacks suppressed
	[  +6.066716] kauditd_printk_skb: 90 callbacks suppressed
	[  +7.561943] systemd-fstab-generator[3850]: Ignoring "noauto" option for root device
	[ +21.524209] kauditd_printk_skb: 21 callbacks suppressed
	[Jul29 21:11] systemd-fstab-generator[9772]: Ignoring "noauto" option for root device
	[Jul29 21:12] kauditd_printk_skb: 64 callbacks suppressed
	[Jul29 21:15] systemd-fstab-generator[11560]: Ignoring "noauto" option for root device
	[Jul29 21:16] kauditd_printk_skb: 48 callbacks suppressed
	
	
	==> kernel <==
	 21:19:52 up 18 min,  0 users,  load average: 0.11, 0.19, 0.16
	Linux cert-expiration-461577 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [7e3932e2f27408ae1fa4ecc44f82c9414fd294ba8003a1cfb0e56df1c0f61b19] <==
	I0729 21:18:38.635450       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0729 21:18:39.000008       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:39.000656       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0729 21:18:39.000716       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 21:18:39.007897       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 21:18:39.011507       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 21:18:39.011563       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 21:18:39.011735       1 instance.go:299] Using reconciler: lease
	W0729 21:18:39.012511       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:40.000963       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:40.001128       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:40.012833       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:41.369367       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:41.420469       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:41.658674       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:43.691697       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:43.694237       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:44.225410       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:47.465521       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:48.439952       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:48.440234       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:53.853341       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:54.697177       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 21:18:55.028968       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	F0729 21:18:59.012242       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
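	The repeated 'connection refused' dials to 127.0.0.1:2379 above mean the API server never reaches etcd; a minimal sketch of confirming that on the node (the crictl query is the same one minikube runs earlier in this log; the ss invocation is illustrative and assumes iproute2 is present in the VM):
	
	  # is there an etcd container at all?
	  sudo crictl ps -a --quiet --name=etcd
	  # is anything listening on the etcd client port?
	  sudo ss -ltnp | grep 2379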
	
	
	==> kube-controller-manager [35dec04cdc04413fcf2e6f69983db52a8de37404b19f195e5d3643de8162723c] <==
	I0729 21:18:47.039859       1 serving.go:380] Generated self-signed cert in-memory
	I0729 21:18:47.345456       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 21:18:47.345538       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 21:18:47.346979       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 21:18:47.347603       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 21:18:47.347761       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 21:18:47.347848       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 21:19:07.350188       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.72.140:8443/healthz\": dial tcp 192.168.72.140:8443: connect: connection refused"
	
	
	==> kubelet <==
	Jul 29 21:19:41 cert-expiration-461577 kubelet[11567]: I0729 21:19:41.270002   11567 kubelet_node_status.go:73] "Attempting to register node" node="cert-expiration-461577"
	Jul 29 21:19:41 cert-expiration-461577 kubelet[11567]: E0729 21:19:41.271148   11567 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.140:8443: connect: connection refused" node="cert-expiration-461577"
	Jul 29 21:19:41 cert-expiration-461577 kubelet[11567]: I0729 21:19:41.461723   11567 scope.go:117] "RemoveContainer" containerID="35dec04cdc04413fcf2e6f69983db52a8de37404b19f195e5d3643de8162723c"
	Jul 29 21:19:41 cert-expiration-461577 kubelet[11567]: E0729 21:19:41.462408   11567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-cert-expiration-461577_kube-system(51fea26aab68d7d24910fdd5d02cc161)\"" pod="kube-system/kube-controller-manager-cert-expiration-461577" podUID="51fea26aab68d7d24910fdd5d02cc161"
	Jul 29 21:19:42 cert-expiration-461577 kubelet[11567]: E0729 21:19:42.253309   11567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-461577?timeout=10s\": dial tcp 192.168.72.140:8443: connect: connection refused" interval="7s"
	Jul 29 21:19:44 cert-expiration-461577 kubelet[11567]: I0729 21:19:44.460619   11567 scope.go:117] "RemoveContainer" containerID="7e3932e2f27408ae1fa4ecc44f82c9414fd294ba8003a1cfb0e56df1c0f61b19"
	Jul 29 21:19:44 cert-expiration-461577 kubelet[11567]: E0729 21:19:44.461100   11567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-cert-expiration-461577_kube-system(a18d2c27f8339fe54beac7f489e547c4)\"" pod="kube-system/kube-apiserver-cert-expiration-461577" podUID="a18d2c27f8339fe54beac7f489e547c4"
	Jul 29 21:19:48 cert-expiration-461577 kubelet[11567]: I0729 21:19:48.273997   11567 kubelet_node_status.go:73] "Attempting to register node" node="cert-expiration-461577"
	Jul 29 21:19:48 cert-expiration-461577 kubelet[11567]: E0729 21:19:48.275003   11567 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.140:8443: connect: connection refused" node="cert-expiration-461577"
	Jul 29 21:19:48 cert-expiration-461577 kubelet[11567]: E0729 21:19:48.477319   11567 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-scheduler_kube-scheduler-cert-expiration-461577_kube-system_4304ac65ea070120801bc50163739b88_1\" is already in use by 2c0f53fb83a9ae750a8bec78afef22cd989560ba438831d4c7f4fbd4ee2424b2. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="2820bdf2ff4fe78f9f483ed2629df321a1e49dc8a42c6efa9510231227323b0c"
	Jul 29 21:19:48 cert-expiration-461577 kubelet[11567]: E0729 21:19:48.477706   11567 kuberuntime_manager.go:1256] container &Container{Name:kube-scheduler,Image:registry.k8s.io/kube-scheduler:v1.30.3,Command:[kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=false],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubeconfig,ReadOnly:true,MountPath:/etc/kubernetes/scheduler.conf,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10259 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 10259 },Host:127.0.0.1,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-scheduler-cert-expiration-461577_kube-system(4304ac65ea070120801bc50163739b88): CreateContainerError: the container name "k8s_kube-scheduler_kube-scheduler-cert-expiration-461577_kube-system_4304ac65ea070120801bc50163739b88_1" is already in use by 2c0f53fb83a9ae750a8bec78afef22cd989560ba438831d4c7f4fbd4ee2424b2. You have to remove that container to be able to reuse that name: that name is already in use
	Jul 29 21:19:48 cert-expiration-461577 kubelet[11567]: E0729 21:19:48.477823   11567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"the container name \\\"k8s_kube-scheduler_kube-scheduler-cert-expiration-461577_kube-system_4304ac65ea070120801bc50163739b88_1\\\" is already in use by 2c0f53fb83a9ae750a8bec78afef22cd989560ba438831d4c7f4fbd4ee2424b2. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-scheduler-cert-expiration-461577" podUID="4304ac65ea070120801bc50163739b88"
	Jul 29 21:19:48 cert-expiration-461577 kubelet[11567]: E0729 21:19:48.480645   11567 remote_runtime.go:319] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_etcd_etcd-cert-expiration-461577_kube-system_6676dd0374818fa6c6aa191640b70b7d_1\" is already in use by 0345c58a40ed6f5a6bb1d4fdda642d444a174b453bc13d3ab8dfa4a677cc7ab2. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="dc67670ce84b07c033100bb7d3a539a1b62182ce94877e519d4f716a7c1ea951"
	Jul 29 21:19:48 cert-expiration-461577 kubelet[11567]: E0729 21:19:48.480948   11567 kuberuntime_manager.go:1256] container &Container{Name:etcd,Image:registry.k8s.io/etcd:3.5.12-0,Command:[etcd --advertise-client-urls=https://192.168.72.140:2379 --cert-file=/var/lib/minikube/certs/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/minikube/etcd --experimental-initial-corrupt-check=true --experimental-watch-progress-notify-interval=5s --initial-advertise-peer-urls=https://192.168.72.140:2380 --initial-cluster=cert-expiration-461577=https://192.168.72.140:2380 --key-file=/var/lib/minikube/certs/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.72.140:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.72.140:2380 --name=cert-expiration-461577 --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/var/lib/minikube/certs/etcd/peer.key --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt --proxy-refresh-interval=70000 --snapshot-count=10000 --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{104857600 0} {<nil>} 100Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etcd-data,ReadOnly:false,MountPath:/var/lib/minikube/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etcd-certs,ReadOnly:false,MountPath:/var/lib/minikube/certs/etcd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health?exclude=NOSPACE&serializable=true,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health?serializable=false,Port:{0 2381 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:24,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod etcd-cert-expiration-461577_kube-system(6676dd0374818fa6c6aa191640b70b7d): CreateContainerError: the container name "k8s_etcd_etcd-cert-expiration-461577_kube-system_6676dd0374818fa6c6aa191640b70b7d_1" is already in use by 0345c58a40ed6f5a6bb1d4fdda642d444a174b453bc13d3ab8dfa4a677cc7ab2. You have to remove that container to be able to reuse that name: that name is already in use
	Jul 29 21:19:48 cert-expiration-461577 kubelet[11567]: E0729 21:19:48.481033   11567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"the container name \\\"k8s_etcd_etcd-cert-expiration-461577_kube-system_6676dd0374818fa6c6aa191640b70b7d_1\\\" is already in use by 0345c58a40ed6f5a6bb1d4fdda642d444a174b453bc13d3ab8dfa4a677cc7ab2. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/etcd-cert-expiration-461577" podUID="6676dd0374818fa6c6aa191640b70b7d"
	Jul 29 21:19:49 cert-expiration-461577 kubelet[11567]: E0729 21:19:49.255246   11567 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-461577?timeout=10s\": dial tcp 192.168.72.140:8443: connect: connection refused" interval="7s"
	Jul 29 21:19:49 cert-expiration-461577 kubelet[11567]: E0729 21:19:49.367010   11567 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.72.140:8443: connect: connection refused" event="&Event{ObjectMeta:{cert-expiration-461577.17e6cb957e78eadd  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:cert-expiration-461577,UID:cert-expiration-461577,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node cert-expiration-461577 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:cert-expiration-461577,},FirstTimestamp:2024-07-29 21:15:50.448212701 +0000 UTC m=+0.326035170,LastTimestamp:2024-07-29 21:15:50.448212701 +0000 UTC m=+0.326035170,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:cert-expiration-461577,}"
	Jul 29 21:19:50 cert-expiration-461577 kubelet[11567]: E0729 21:19:50.475135   11567 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 21:19:50 cert-expiration-461577 kubelet[11567]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 21:19:50 cert-expiration-461577 kubelet[11567]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 21:19:50 cert-expiration-461577 kubelet[11567]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 21:19:50 cert-expiration-461577 kubelet[11567]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 21:19:50 cert-expiration-461577 kubelet[11567]: E0729 21:19:50.483380   11567 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"cert-expiration-461577\" not found"
	Jul 29 21:19:52 cert-expiration-461577 kubelet[11567]: I0729 21:19:52.461462   11567 scope.go:117] "RemoveContainer" containerID="35dec04cdc04413fcf2e6f69983db52a8de37404b19f195e5d3643de8162723c"
	Jul 29 21:19:52 cert-expiration-461577 kubelet[11567]: E0729 21:19:52.461909   11567 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-cert-expiration-461577_kube-system(51fea26aab68d7d24910fdd5d02cc161)\"" pod="kube-system/kube-controller-manager-cert-expiration-461577" podUID="51fea26aab68d7d24910fdd5d02cc161"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-expiration-461577 -n cert-expiration-461577
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-expiration-461577 -n cert-expiration-461577: exit status 2 (240.774745ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "cert-expiration-461577" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-461577" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-461577
--- FAIL: TestCertExpiration (1090.68s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 node stop m02 -v=7 --alsologtostderr
E0729 20:14:36.013904  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:15:57.935068  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344518 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.487700968s)

                                                
                                                
-- stdout --
	* Stopping node "ha-344518-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:14:10.988269  759743 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:14:10.988406  759743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:14:10.988420  759743 out.go:304] Setting ErrFile to fd 2...
	I0729 20:14:10.988426  759743 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:14:10.988628  759743 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:14:10.988893  759743 mustload.go:65] Loading cluster: ha-344518
	I0729 20:14:10.989241  759743 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:14:10.989261  759743 stop.go:39] StopHost: ha-344518-m02
	I0729 20:14:10.989646  759743 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:14:10.989696  759743 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:14:11.007526  759743 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46165
	I0729 20:14:11.008043  759743 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:14:11.008707  759743 main.go:141] libmachine: Using API Version  1
	I0729 20:14:11.008733  759743 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:14:11.009135  759743 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:14:11.011575  759743 out.go:177] * Stopping node "ha-344518-m02"  ...
	I0729 20:14:11.013029  759743 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 20:14:11.013075  759743 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:14:11.013347  759743 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 20:14:11.013379  759743 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:14:11.016603  759743 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:14:11.017096  759743 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:14:11.017131  759743 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:14:11.017281  759743 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:14:11.017502  759743 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:14:11.017689  759743 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:14:11.017846  759743 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	I0729 20:14:11.107432  759743 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 20:14:11.160208  759743 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 20:14:11.214769  759743 main.go:141] libmachine: Stopping "ha-344518-m02"...
	I0729 20:14:11.214801  759743 main.go:141] libmachine: (ha-344518-m02) Calling .GetState
	I0729 20:14:11.216559  759743 main.go:141] libmachine: (ha-344518-m02) Calling .Stop
	I0729 20:14:11.220354  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 0/120
	I0729 20:14:12.222442  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 1/120
	I0729 20:14:13.224056  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 2/120
	I0729 20:14:14.225437  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 3/120
	I0729 20:14:15.226954  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 4/120
	I0729 20:14:16.229339  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 5/120
	I0729 20:14:17.230961  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 6/120
	I0729 20:14:18.232660  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 7/120
	I0729 20:14:19.234920  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 8/120
	I0729 20:14:20.236548  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 9/120
	I0729 20:14:21.239268  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 10/120
	I0729 20:14:22.240818  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 11/120
	I0729 20:14:23.242898  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 12/120
	I0729 20:14:24.244633  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 13/120
	I0729 20:14:25.246768  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 14/120
	I0729 20:14:26.248303  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 15/120
	I0729 20:14:27.250582  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 16/120
	I0729 20:14:28.251918  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 17/120
	I0729 20:14:29.253514  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 18/120
	I0729 20:14:30.255012  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 19/120
	I0729 20:14:31.257166  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 20/120
	I0729 20:14:32.258784  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 21/120
	I0729 20:14:33.260594  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 22/120
	I0729 20:14:34.262689  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 23/120
	I0729 20:14:35.264696  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 24/120
	I0729 20:14:36.266556  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 25/120
	I0729 20:14:37.267815  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 26/120
	I0729 20:14:38.270017  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 27/120
	I0729 20:14:39.271336  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 28/120
	I0729 20:14:40.272784  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 29/120
	I0729 20:14:41.275122  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 30/120
	I0729 20:14:42.277287  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 31/120
	I0729 20:14:43.279614  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 32/120
	I0729 20:14:44.281729  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 33/120
	I0729 20:14:45.283118  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 34/120
	I0729 20:14:46.285249  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 35/120
	I0729 20:14:47.286820  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 36/120
	I0729 20:14:48.288428  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 37/120
	I0729 20:14:49.290620  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 38/120
	I0729 20:14:50.292944  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 39/120
	I0729 20:14:51.295365  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 40/120
	I0729 20:14:52.296703  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 41/120
	I0729 20:14:53.298351  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 42/120
	I0729 20:14:54.299788  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 43/120
	I0729 20:14:55.301171  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 44/120
	I0729 20:14:56.302630  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 45/120
	I0729 20:14:57.304090  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 46/120
	I0729 20:14:58.305819  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 47/120
	I0729 20:14:59.307202  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 48/120
	I0729 20:15:00.309161  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 49/120
	I0729 20:15:01.311576  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 50/120
	I0729 20:15:02.314073  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 51/120
	I0729 20:15:03.315583  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 52/120
	I0729 20:15:04.317048  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 53/120
	I0729 20:15:05.318991  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 54/120
	I0729 20:15:06.320326  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 55/120
	I0729 20:15:07.322623  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 56/120
	I0729 20:15:08.324188  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 57/120
	I0729 20:15:09.325649  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 58/120
	I0729 20:15:10.327104  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 59/120
	I0729 20:15:11.328856  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 60/120
	I0729 20:15:12.330417  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 61/120
	I0729 20:15:13.331939  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 62/120
	I0729 20:15:14.333306  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 63/120
	I0729 20:15:15.334770  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 64/120
	I0729 20:15:16.336431  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 65/120
	I0729 20:15:17.337964  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 66/120
	I0729 20:15:18.339298  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 67/120
	I0729 20:15:19.341032  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 68/120
	I0729 20:15:20.342929  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 69/120
	I0729 20:15:21.345145  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 70/120
	I0729 20:15:22.347201  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 71/120
	I0729 20:15:23.349268  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 72/120
	I0729 20:15:24.351037  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 73/120
	I0729 20:15:25.353254  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 74/120
	I0729 20:15:26.355390  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 75/120
	I0729 20:15:27.356931  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 76/120
	I0729 20:15:28.359132  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 77/120
	I0729 20:15:29.360431  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 78/120
	I0729 20:15:30.362874  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 79/120
	I0729 20:15:31.365371  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 80/120
	I0729 20:15:32.367557  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 81/120
	I0729 20:15:33.369102  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 82/120
	I0729 20:15:34.370471  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 83/120
	I0729 20:15:35.372390  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 84/120
	I0729 20:15:36.374434  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 85/120
	I0729 20:15:37.376547  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 86/120
	I0729 20:15:38.378020  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 87/120
	I0729 20:15:39.379971  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 88/120
	I0729 20:15:40.381299  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 89/120
	I0729 20:15:41.383472  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 90/120
	I0729 20:15:42.385385  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 91/120
	I0729 20:15:43.387153  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 92/120
	I0729 20:15:44.388546  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 93/120
	I0729 20:15:45.390579  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 94/120
	I0729 20:15:46.392614  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 95/120
	I0729 20:15:47.394227  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 96/120
	I0729 20:15:48.395986  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 97/120
	I0729 20:15:49.397506  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 98/120
	I0729 20:15:50.399148  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 99/120
	I0729 20:15:51.401539  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 100/120
	I0729 20:15:52.403028  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 101/120
	I0729 20:15:53.404725  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 102/120
	I0729 20:15:54.406599  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 103/120
	I0729 20:15:55.407987  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 104/120
	I0729 20:15:56.409611  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 105/120
	I0729 20:15:57.411167  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 106/120
	I0729 20:15:58.412769  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 107/120
	I0729 20:15:59.414236  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 108/120
	I0729 20:16:00.415529  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 109/120
	I0729 20:16:01.417827  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 110/120
	I0729 20:16:02.419208  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 111/120
	I0729 20:16:03.420493  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 112/120
	I0729 20:16:04.422621  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 113/120
	I0729 20:16:05.424044  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 114/120
	I0729 20:16:06.425902  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 115/120
	I0729 20:16:07.427604  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 116/120
	I0729 20:16:08.429466  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 117/120
	I0729 20:16:09.430990  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 118/120
	I0729 20:16:10.432915  759743 main.go:141] libmachine: (ha-344518-m02) Waiting for machine to stop 119/120
	I0729 20:16:11.433483  759743 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 20:16:11.433639  759743 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-344518 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr: exit status 3 (19.239958357s)

                                                
                                                
-- stdout --
	ha-344518
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-344518-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:16:11.480056  760199 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:16:11.480186  760199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:16:11.480195  760199 out.go:304] Setting ErrFile to fd 2...
	I0729 20:16:11.480199  760199 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:16:11.480389  760199 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:16:11.480570  760199 out.go:298] Setting JSON to false
	I0729 20:16:11.480600  760199 mustload.go:65] Loading cluster: ha-344518
	I0729 20:16:11.480647  760199 notify.go:220] Checking for updates...
	I0729 20:16:11.480959  760199 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:16:11.480975  760199 status.go:255] checking status of ha-344518 ...
	I0729 20:16:11.481340  760199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:11.481402  760199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:11.498384  760199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33807
	I0729 20:16:11.498970  760199 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:11.499819  760199 main.go:141] libmachine: Using API Version  1
	I0729 20:16:11.499857  760199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:11.500316  760199 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:11.500559  760199 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:16:11.502257  760199 status.go:330] ha-344518 host status = "Running" (err=<nil>)
	I0729 20:16:11.502277  760199 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:16:11.502552  760199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:11.502589  760199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:11.518048  760199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38891
	I0729 20:16:11.518484  760199 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:11.518975  760199 main.go:141] libmachine: Using API Version  1
	I0729 20:16:11.518997  760199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:11.519327  760199 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:11.519540  760199 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:16:11.522423  760199 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:11.522817  760199 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:16:11.522843  760199 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:11.522964  760199 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:16:11.523254  760199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:11.523288  760199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:11.539060  760199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37249
	I0729 20:16:11.539582  760199 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:11.540067  760199 main.go:141] libmachine: Using API Version  1
	I0729 20:16:11.540089  760199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:11.540422  760199 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:11.540599  760199 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:16:11.540837  760199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:11.540871  760199 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:16:11.543439  760199 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:11.543916  760199 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:16:11.543952  760199 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:11.544124  760199 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:16:11.544319  760199 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:16:11.544510  760199 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:16:11.544655  760199 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:16:11.628549  760199 ssh_runner.go:195] Run: systemctl --version
	I0729 20:16:11.636182  760199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:11.652751  760199 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:16:11.652779  760199 api_server.go:166] Checking apiserver status ...
	I0729 20:16:11.652827  760199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:16:11.668731  760199 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0729 20:16:11.680889  760199 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:16:11.680955  760199 ssh_runner.go:195] Run: ls
	I0729 20:16:11.690002  760199 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:16:11.696877  760199 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:16:11.696900  760199 status.go:422] ha-344518 apiserver status = Running (err=<nil>)
	I0729 20:16:11.696911  760199 status.go:257] ha-344518 status: &{Name:ha-344518 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:16:11.696926  760199 status.go:255] checking status of ha-344518-m02 ...
	I0729 20:16:11.697224  760199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:11.697264  760199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:11.713343  760199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37367
	I0729 20:16:11.713840  760199 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:11.714281  760199 main.go:141] libmachine: Using API Version  1
	I0729 20:16:11.714301  760199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:11.714612  760199 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:11.714802  760199 main.go:141] libmachine: (ha-344518-m02) Calling .GetState
	I0729 20:16:11.716194  760199 status.go:330] ha-344518-m02 host status = "Running" (err=<nil>)
	I0729 20:16:11.716213  760199 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:16:11.716506  760199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:11.716546  760199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:11.731909  760199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34529
	I0729 20:16:11.732471  760199 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:11.732912  760199 main.go:141] libmachine: Using API Version  1
	I0729 20:16:11.732934  760199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:11.733243  760199 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:11.733414  760199 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:16:11.736387  760199 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:11.736845  760199 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:16:11.736871  760199 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:11.737021  760199 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:16:11.737352  760199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:11.737393  760199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:11.752868  760199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40983
	I0729 20:16:11.753337  760199 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:11.753838  760199 main.go:141] libmachine: Using API Version  1
	I0729 20:16:11.753862  760199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:11.754156  760199 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:11.754363  760199 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:16:11.754561  760199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:11.754583  760199 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:16:11.757061  760199 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:11.757459  760199 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:16:11.757478  760199 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:11.757667  760199 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:16:11.757848  760199 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:16:11.757995  760199 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:16:11.758193  760199 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	W0729 20:16:30.308279  760199 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	W0729 20:16:30.308402  760199 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0729 20:16:30.308420  760199 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:16:30.308427  760199 status.go:257] ha-344518-m02 status: &{Name:ha-344518-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 20:16:30.308445  760199 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:16:30.308453  760199 status.go:255] checking status of ha-344518-m03 ...
	I0729 20:16:30.308770  760199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:30.308814  760199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:30.324284  760199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35173
	I0729 20:16:30.324784  760199 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:30.325336  760199 main.go:141] libmachine: Using API Version  1
	I0729 20:16:30.325365  760199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:30.325746  760199 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:30.325990  760199 main.go:141] libmachine: (ha-344518-m03) Calling .GetState
	I0729 20:16:30.327740  760199 status.go:330] ha-344518-m03 host status = "Running" (err=<nil>)
	I0729 20:16:30.327759  760199 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:16:30.328091  760199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:30.328138  760199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:30.343615  760199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42149
	I0729 20:16:30.344073  760199 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:30.344594  760199 main.go:141] libmachine: Using API Version  1
	I0729 20:16:30.344623  760199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:30.344978  760199 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:30.345200  760199 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:16:30.348567  760199 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:30.349081  760199 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:16:30.349107  760199 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:30.349265  760199 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:16:30.349608  760199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:30.349661  760199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:30.365947  760199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38859
	I0729 20:16:30.366384  760199 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:30.366874  760199 main.go:141] libmachine: Using API Version  1
	I0729 20:16:30.366897  760199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:30.367201  760199 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:30.367582  760199 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:16:30.367793  760199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:30.367819  760199 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:16:30.370852  760199 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:30.371397  760199 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:16:30.371438  760199 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:30.371823  760199 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:16:30.372009  760199 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:16:30.372218  760199 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:16:30.372367  760199 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:16:30.456388  760199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:30.475469  760199 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:16:30.475508  760199 api_server.go:166] Checking apiserver status ...
	I0729 20:16:30.475555  760199 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:16:30.490713  760199 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup
	W0729 20:16:30.500161  760199 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:16:30.500216  760199 ssh_runner.go:195] Run: ls
	I0729 20:16:30.504687  760199 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:16:30.508938  760199 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:16:30.508961  760199 status.go:422] ha-344518-m03 apiserver status = Running (err=<nil>)
	I0729 20:16:30.508969  760199 status.go:257] ha-344518-m03 status: &{Name:ha-344518-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:16:30.508993  760199 status.go:255] checking status of ha-344518-m04 ...
	I0729 20:16:30.509275  760199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:30.509317  760199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:30.525280  760199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33877
	I0729 20:16:30.526041  760199 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:30.526713  760199 main.go:141] libmachine: Using API Version  1
	I0729 20:16:30.526742  760199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:30.527139  760199 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:30.527369  760199 main.go:141] libmachine: (ha-344518-m04) Calling .GetState
	I0729 20:16:30.528937  760199 status.go:330] ha-344518-m04 host status = "Running" (err=<nil>)
	I0729 20:16:30.528955  760199 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:16:30.529245  760199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:30.529299  760199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:30.544826  760199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I0729 20:16:30.545281  760199 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:30.545754  760199 main.go:141] libmachine: Using API Version  1
	I0729 20:16:30.545776  760199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:30.546133  760199 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:30.546330  760199 main.go:141] libmachine: (ha-344518-m04) Calling .GetIP
	I0729 20:16:30.549107  760199 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:30.549644  760199 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:16:30.549662  760199 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:30.549795  760199 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:16:30.550085  760199 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:30.550129  760199 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:30.566644  760199 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36847
	I0729 20:16:30.567095  760199 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:30.567551  760199 main.go:141] libmachine: Using API Version  1
	I0729 20:16:30.567579  760199 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:30.567930  760199 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:30.568168  760199 main.go:141] libmachine: (ha-344518-m04) Calling .DriverName
	I0729 20:16:30.568376  760199 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:30.568398  760199 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHHostname
	I0729 20:16:30.571086  760199 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:30.571517  760199 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:16:30.571544  760199 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:30.571667  760199 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHPort
	I0729 20:16:30.571829  760199 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHKeyPath
	I0729 20:16:30.571987  760199 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHUsername
	I0729 20:16:30.572154  760199 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m04/id_rsa Username:docker}
	I0729 20:16:30.658306  760199 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:30.673520  760199 status.go:257] ha-344518-m04 status: &{Name:ha-344518-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-344518 -n ha-344518
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-344518 logs -n 25: (1.311791312s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-344518 cp ha-344518-m03:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1656315222/001/cp-test_ha-344518-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m03:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518:/home/docker/cp-test_ha-344518-m03_ha-344518.txt                       |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518 sudo cat                                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m03_ha-344518.txt                                 |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m03:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m02:/home/docker/cp-test_ha-344518-m03_ha-344518-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m02 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m03_ha-344518-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m03:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04:/home/docker/cp-test_ha-344518-m03_ha-344518-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m04 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m03_ha-344518-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-344518 cp testdata/cp-test.txt                                                | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1656315222/001/cp-test_ha-344518-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518:/home/docker/cp-test_ha-344518-m04_ha-344518.txt                       |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518 sudo cat                                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m04_ha-344518.txt                                 |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m02:/home/docker/cp-test_ha-344518-m04_ha-344518-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m02 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m04_ha-344518-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03:/home/docker/cp-test_ha-344518-m04_ha-344518-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m03 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m04_ha-344518-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-344518 node stop m02 -v=7                                                     | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 20:09:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 20:09:06.231628  755599 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:09:06.231745  755599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:09:06.231753  755599 out.go:304] Setting ErrFile to fd 2...
	I0729 20:09:06.231757  755599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:09:06.231921  755599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:09:06.232515  755599 out.go:298] Setting JSON to false
	I0729 20:09:06.233440  755599 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":13893,"bootTime":1722269853,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 20:09:06.233498  755599 start.go:139] virtualization: kvm guest
	I0729 20:09:06.235386  755599 out.go:177] * [ha-344518] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 20:09:06.236562  755599 notify.go:220] Checking for updates...
	I0729 20:09:06.236588  755599 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 20:09:06.238002  755599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 20:09:06.239211  755599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:09:06.240449  755599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:09:06.241551  755599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 20:09:06.242850  755599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 20:09:06.244188  755599 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 20:09:06.278842  755599 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 20:09:06.280106  755599 start.go:297] selected driver: kvm2
	I0729 20:09:06.280121  755599 start.go:901] validating driver "kvm2" against <nil>
	I0729 20:09:06.280147  755599 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 20:09:06.280916  755599 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:09:06.280994  755599 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 20:09:06.296612  755599 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 20:09:06.296658  755599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 20:09:06.296868  755599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 20:09:06.296926  755599 cni.go:84] Creating CNI manager for ""
	I0729 20:09:06.296937  755599 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 20:09:06.296945  755599 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 20:09:06.296993  755599 start.go:340] cluster config:
	{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:09:06.297084  755599 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:09:06.298814  755599 out.go:177] * Starting "ha-344518" primary control-plane node in "ha-344518" cluster
	I0729 20:09:06.299933  755599 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:09:06.299968  755599 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 20:09:06.299979  755599 cache.go:56] Caching tarball of preloaded images
	I0729 20:09:06.300071  755599 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 20:09:06.300082  755599 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 20:09:06.300394  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:09:06.300421  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json: {Name:mk224013752309fc375b2d4f8dabe788d7615796 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:06.300553  755599 start.go:360] acquireMachinesLock for ha-344518: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:09:06.300579  755599 start.go:364] duration metric: took 14.513µs to acquireMachinesLock for "ha-344518"
	I0729 20:09:06.300594  755599 start.go:93] Provisioning new machine with config: &{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:09:06.300656  755599 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 20:09:06.302205  755599 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 20:09:06.302327  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:09:06.302360  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:09:06.316692  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0729 20:09:06.317211  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:09:06.317813  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:09:06.317837  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:09:06.318209  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:09:06.318430  755599 main.go:141] libmachine: (ha-344518) Calling .GetMachineName
	I0729 20:09:06.318601  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:06.318777  755599 start.go:159] libmachine.API.Create for "ha-344518" (driver="kvm2")
	I0729 20:09:06.318804  755599 client.go:168] LocalClient.Create starting
	I0729 20:09:06.318838  755599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem
	I0729 20:09:06.318870  755599 main.go:141] libmachine: Decoding PEM data...
	I0729 20:09:06.318887  755599 main.go:141] libmachine: Parsing certificate...
	I0729 20:09:06.318949  755599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem
	I0729 20:09:06.318966  755599 main.go:141] libmachine: Decoding PEM data...
	I0729 20:09:06.318979  755599 main.go:141] libmachine: Parsing certificate...
	I0729 20:09:06.318994  755599 main.go:141] libmachine: Running pre-create checks...
	I0729 20:09:06.319006  755599 main.go:141] libmachine: (ha-344518) Calling .PreCreateCheck
	I0729 20:09:06.319328  755599 main.go:141] libmachine: (ha-344518) Calling .GetConfigRaw
	I0729 20:09:06.319715  755599 main.go:141] libmachine: Creating machine...
	I0729 20:09:06.319729  755599 main.go:141] libmachine: (ha-344518) Calling .Create
	I0729 20:09:06.319853  755599 main.go:141] libmachine: (ha-344518) Creating KVM machine...
	I0729 20:09:06.320964  755599 main.go:141] libmachine: (ha-344518) DBG | found existing default KVM network
	I0729 20:09:06.321728  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:06.321596  755622 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f350}
	I0729 20:09:06.321744  755599 main.go:141] libmachine: (ha-344518) DBG | created network xml: 
	I0729 20:09:06.321754  755599 main.go:141] libmachine: (ha-344518) DBG | <network>
	I0729 20:09:06.321761  755599 main.go:141] libmachine: (ha-344518) DBG |   <name>mk-ha-344518</name>
	I0729 20:09:06.321770  755599 main.go:141] libmachine: (ha-344518) DBG |   <dns enable='no'/>
	I0729 20:09:06.321776  755599 main.go:141] libmachine: (ha-344518) DBG |   
	I0729 20:09:06.321785  755599 main.go:141] libmachine: (ha-344518) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 20:09:06.321793  755599 main.go:141] libmachine: (ha-344518) DBG |     <dhcp>
	I0729 20:09:06.321803  755599 main.go:141] libmachine: (ha-344518) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 20:09:06.321818  755599 main.go:141] libmachine: (ha-344518) DBG |     </dhcp>
	I0729 20:09:06.321856  755599 main.go:141] libmachine: (ha-344518) DBG |   </ip>
	I0729 20:09:06.321889  755599 main.go:141] libmachine: (ha-344518) DBG |   
	I0729 20:09:06.321972  755599 main.go:141] libmachine: (ha-344518) DBG | </network>
	I0729 20:09:06.321990  755599 main.go:141] libmachine: (ha-344518) DBG | 
	I0729 20:09:06.326724  755599 main.go:141] libmachine: (ha-344518) DBG | trying to create private KVM network mk-ha-344518 192.168.39.0/24...
	I0729 20:09:06.392240  755599 main.go:141] libmachine: (ha-344518) Setting up store path in /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518 ...
	I0729 20:09:06.392278  755599 main.go:141] libmachine: (ha-344518) DBG | private KVM network mk-ha-344518 192.168.39.0/24 created
	I0729 20:09:06.392303  755599 main.go:141] libmachine: (ha-344518) Building disk image from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 20:09:06.392343  755599 main.go:141] libmachine: (ha-344518) Downloading /home/jenkins/minikube-integration/19344-733808/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 20:09:06.392361  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:06.392092  755622 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:09:06.662139  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:06.662000  755622 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa...
	I0729 20:09:07.120112  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:07.119894  755622 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/ha-344518.rawdisk...
	I0729 20:09:07.120156  755599 main.go:141] libmachine: (ha-344518) DBG | Writing magic tar header
	I0729 20:09:07.120174  755599 main.go:141] libmachine: (ha-344518) DBG | Writing SSH key tar header
	I0729 20:09:07.120201  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:07.120077  755622 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518 ...
	I0729 20:09:07.120218  755599 main.go:141] libmachine: (ha-344518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518
	I0729 20:09:07.120249  755599 main.go:141] libmachine: (ha-344518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines
	I0729 20:09:07.120266  755599 main.go:141] libmachine: (ha-344518) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518 (perms=drwx------)
	I0729 20:09:07.120285  755599 main.go:141] libmachine: (ha-344518) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines (perms=drwxr-xr-x)
	I0729 20:09:07.120299  755599 main.go:141] libmachine: (ha-344518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:09:07.120317  755599 main.go:141] libmachine: (ha-344518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808
	I0729 20:09:07.120328  755599 main.go:141] libmachine: (ha-344518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 20:09:07.120337  755599 main.go:141] libmachine: (ha-344518) DBG | Checking permissions on dir: /home/jenkins
	I0729 20:09:07.120346  755599 main.go:141] libmachine: (ha-344518) DBG | Checking permissions on dir: /home
	I0729 20:09:07.120362  755599 main.go:141] libmachine: (ha-344518) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube (perms=drwxr-xr-x)
	I0729 20:09:07.120373  755599 main.go:141] libmachine: (ha-344518) DBG | Skipping /home - not owner
	I0729 20:09:07.120386  755599 main.go:141] libmachine: (ha-344518) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808 (perms=drwxrwxr-x)
	I0729 20:09:07.120400  755599 main.go:141] libmachine: (ha-344518) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 20:09:07.120409  755599 main.go:141] libmachine: (ha-344518) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 20:09:07.120418  755599 main.go:141] libmachine: (ha-344518) Creating domain...
	I0729 20:09:07.121549  755599 main.go:141] libmachine: (ha-344518) define libvirt domain using xml: 
	I0729 20:09:07.121577  755599 main.go:141] libmachine: (ha-344518) <domain type='kvm'>
	I0729 20:09:07.121608  755599 main.go:141] libmachine: (ha-344518)   <name>ha-344518</name>
	I0729 20:09:07.121628  755599 main.go:141] libmachine: (ha-344518)   <memory unit='MiB'>2200</memory>
	I0729 20:09:07.121637  755599 main.go:141] libmachine: (ha-344518)   <vcpu>2</vcpu>
	I0729 20:09:07.121645  755599 main.go:141] libmachine: (ha-344518)   <features>
	I0729 20:09:07.121650  755599 main.go:141] libmachine: (ha-344518)     <acpi/>
	I0729 20:09:07.121658  755599 main.go:141] libmachine: (ha-344518)     <apic/>
	I0729 20:09:07.121663  755599 main.go:141] libmachine: (ha-344518)     <pae/>
	I0729 20:09:07.121672  755599 main.go:141] libmachine: (ha-344518)     
	I0729 20:09:07.121679  755599 main.go:141] libmachine: (ha-344518)   </features>
	I0729 20:09:07.121687  755599 main.go:141] libmachine: (ha-344518)   <cpu mode='host-passthrough'>
	I0729 20:09:07.121704  755599 main.go:141] libmachine: (ha-344518)   
	I0729 20:09:07.121712  755599 main.go:141] libmachine: (ha-344518)   </cpu>
	I0729 20:09:07.121716  755599 main.go:141] libmachine: (ha-344518)   <os>
	I0729 20:09:07.121720  755599 main.go:141] libmachine: (ha-344518)     <type>hvm</type>
	I0729 20:09:07.121725  755599 main.go:141] libmachine: (ha-344518)     <boot dev='cdrom'/>
	I0729 20:09:07.121732  755599 main.go:141] libmachine: (ha-344518)     <boot dev='hd'/>
	I0729 20:09:07.121737  755599 main.go:141] libmachine: (ha-344518)     <bootmenu enable='no'/>
	I0729 20:09:07.121743  755599 main.go:141] libmachine: (ha-344518)   </os>
	I0729 20:09:07.121748  755599 main.go:141] libmachine: (ha-344518)   <devices>
	I0729 20:09:07.121759  755599 main.go:141] libmachine: (ha-344518)     <disk type='file' device='cdrom'>
	I0729 20:09:07.121798  755599 main.go:141] libmachine: (ha-344518)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/boot2docker.iso'/>
	I0729 20:09:07.121834  755599 main.go:141] libmachine: (ha-344518)       <target dev='hdc' bus='scsi'/>
	I0729 20:09:07.121858  755599 main.go:141] libmachine: (ha-344518)       <readonly/>
	I0729 20:09:07.121871  755599 main.go:141] libmachine: (ha-344518)     </disk>
	I0729 20:09:07.121883  755599 main.go:141] libmachine: (ha-344518)     <disk type='file' device='disk'>
	I0729 20:09:07.121897  755599 main.go:141] libmachine: (ha-344518)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 20:09:07.121917  755599 main.go:141] libmachine: (ha-344518)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/ha-344518.rawdisk'/>
	I0729 20:09:07.121934  755599 main.go:141] libmachine: (ha-344518)       <target dev='hda' bus='virtio'/>
	I0729 20:09:07.121946  755599 main.go:141] libmachine: (ha-344518)     </disk>
	I0729 20:09:07.121957  755599 main.go:141] libmachine: (ha-344518)     <interface type='network'>
	I0729 20:09:07.121970  755599 main.go:141] libmachine: (ha-344518)       <source network='mk-ha-344518'/>
	I0729 20:09:07.121979  755599 main.go:141] libmachine: (ha-344518)       <model type='virtio'/>
	I0729 20:09:07.122010  755599 main.go:141] libmachine: (ha-344518)     </interface>
	I0729 20:09:07.122026  755599 main.go:141] libmachine: (ha-344518)     <interface type='network'>
	I0729 20:09:07.122038  755599 main.go:141] libmachine: (ha-344518)       <source network='default'/>
	I0729 20:09:07.122045  755599 main.go:141] libmachine: (ha-344518)       <model type='virtio'/>
	I0729 20:09:07.122055  755599 main.go:141] libmachine: (ha-344518)     </interface>
	I0729 20:09:07.122064  755599 main.go:141] libmachine: (ha-344518)     <serial type='pty'>
	I0729 20:09:07.122074  755599 main.go:141] libmachine: (ha-344518)       <target port='0'/>
	I0729 20:09:07.122092  755599 main.go:141] libmachine: (ha-344518)     </serial>
	I0729 20:09:07.122103  755599 main.go:141] libmachine: (ha-344518)     <console type='pty'>
	I0729 20:09:07.122114  755599 main.go:141] libmachine: (ha-344518)       <target type='serial' port='0'/>
	I0729 20:09:07.122134  755599 main.go:141] libmachine: (ha-344518)     </console>
	I0729 20:09:07.122146  755599 main.go:141] libmachine: (ha-344518)     <rng model='virtio'>
	I0729 20:09:07.122163  755599 main.go:141] libmachine: (ha-344518)       <backend model='random'>/dev/random</backend>
	I0729 20:09:07.122174  755599 main.go:141] libmachine: (ha-344518)     </rng>
	I0729 20:09:07.122184  755599 main.go:141] libmachine: (ha-344518)     
	I0729 20:09:07.122195  755599 main.go:141] libmachine: (ha-344518)     
	I0729 20:09:07.122205  755599 main.go:141] libmachine: (ha-344518)   </devices>
	I0729 20:09:07.122214  755599 main.go:141] libmachine: (ha-344518) </domain>
	I0729 20:09:07.122223  755599 main.go:141] libmachine: (ha-344518) 
	I0729 20:09:07.126629  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:b8:f5:36 in network default
	I0729 20:09:07.127217  755599 main.go:141] libmachine: (ha-344518) Ensuring networks are active...
	I0729 20:09:07.127238  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:07.127866  755599 main.go:141] libmachine: (ha-344518) Ensuring network default is active
	I0729 20:09:07.128161  755599 main.go:141] libmachine: (ha-344518) Ensuring network mk-ha-344518 is active
	I0729 20:09:07.128730  755599 main.go:141] libmachine: (ha-344518) Getting domain xml...
	I0729 20:09:07.129444  755599 main.go:141] libmachine: (ha-344518) Creating domain...
	I0729 20:09:08.325465  755599 main.go:141] libmachine: (ha-344518) Waiting to get IP...
	I0729 20:09:08.326138  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:08.326574  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:08.326614  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:08.326561  755622 retry.go:31] will retry after 224.638769ms: waiting for machine to come up
	I0729 20:09:08.553151  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:08.553679  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:08.553709  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:08.553642  755622 retry.go:31] will retry after 360.458872ms: waiting for machine to come up
	I0729 20:09:08.915165  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:08.915618  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:08.915650  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:08.915542  755622 retry.go:31] will retry after 382.171333ms: waiting for machine to come up
	I0729 20:09:09.299192  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:09.299704  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:09.299726  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:09.299643  755622 retry.go:31] will retry after 574.829345ms: waiting for machine to come up
	I0729 20:09:09.876480  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:09.876900  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:09.876929  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:09.876838  755622 retry.go:31] will retry after 617.694165ms: waiting for machine to come up
	I0729 20:09:10.495627  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:10.496026  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:10.496077  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:10.495986  755622 retry.go:31] will retry after 847.62874ms: waiting for machine to come up
	I0729 20:09:11.345637  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:11.346047  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:11.346086  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:11.345988  755622 retry.go:31] will retry after 1.112051252s: waiting for machine to come up
	I0729 20:09:12.460263  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:12.460801  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:12.460828  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:12.460733  755622 retry.go:31] will retry after 1.450822293s: waiting for machine to come up
	I0729 20:09:13.913413  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:13.913807  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:13.913837  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:13.913758  755622 retry.go:31] will retry after 1.204942537s: waiting for machine to come up
	I0729 20:09:15.120158  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:15.120563  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:15.120597  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:15.120536  755622 retry.go:31] will retry after 1.553270386s: waiting for machine to come up
	I0729 20:09:16.675191  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:16.675649  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:16.675680  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:16.675591  755622 retry.go:31] will retry after 2.793041861s: waiting for machine to come up
	I0729 20:09:19.472545  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:19.472921  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:19.472942  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:19.472867  755622 retry.go:31] will retry after 2.196371552s: waiting for machine to come up
	I0729 20:09:21.670777  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:21.671128  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:21.671160  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:21.671075  755622 retry.go:31] will retry after 4.263171271s: waiting for machine to come up
	I0729 20:09:25.939488  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:25.940003  755599 main.go:141] libmachine: (ha-344518) Found IP for machine: 192.168.39.238
	I0729 20:09:25.940019  755599 main.go:141] libmachine: (ha-344518) Reserving static IP address...
	I0729 20:09:25.940057  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has current primary IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:25.940425  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find host DHCP lease matching {name: "ha-344518", mac: "52:54:00:e2:94:80", ip: "192.168.39.238"} in network mk-ha-344518
	I0729 20:09:26.013283  755599 main.go:141] libmachine: (ha-344518) DBG | Getting to WaitForSSH function...
	I0729 20:09:26.013316  755599 main.go:141] libmachine: (ha-344518) Reserved static IP address: 192.168.39.238
	I0729 20:09:26.013329  755599 main.go:141] libmachine: (ha-344518) Waiting for SSH to be available...
	I0729 20:09:26.016100  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:26.016491  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518
	I0729 20:09:26.016526  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find defined IP address of network mk-ha-344518 interface with MAC address 52:54:00:e2:94:80
	I0729 20:09:26.016697  755599 main.go:141] libmachine: (ha-344518) DBG | Using SSH client type: external
	I0729 20:09:26.016723  755599 main.go:141] libmachine: (ha-344518) DBG | Using SSH private key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa (-rw-------)
	I0729 20:09:26.016752  755599 main.go:141] libmachine: (ha-344518) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 20:09:26.016767  755599 main.go:141] libmachine: (ha-344518) DBG | About to run SSH command:
	I0729 20:09:26.016778  755599 main.go:141] libmachine: (ha-344518) DBG | exit 0
	I0729 20:09:26.020608  755599 main.go:141] libmachine: (ha-344518) DBG | SSH cmd err, output: exit status 255: 
	I0729 20:09:26.020626  755599 main.go:141] libmachine: (ha-344518) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0729 20:09:26.020633  755599 main.go:141] libmachine: (ha-344518) DBG | command : exit 0
	I0729 20:09:26.020641  755599 main.go:141] libmachine: (ha-344518) DBG | err     : exit status 255
	I0729 20:09:26.020651  755599 main.go:141] libmachine: (ha-344518) DBG | output  : 
	I0729 20:09:29.021997  755599 main.go:141] libmachine: (ha-344518) DBG | Getting to WaitForSSH function...
	I0729 20:09:29.024803  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.025367  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.025408  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.025586  755599 main.go:141] libmachine: (ha-344518) DBG | Using SSH client type: external
	I0729 20:09:29.025624  755599 main.go:141] libmachine: (ha-344518) DBG | Using SSH private key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa (-rw-------)
	I0729 20:09:29.025655  755599 main.go:141] libmachine: (ha-344518) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 20:09:29.025670  755599 main.go:141] libmachine: (ha-344518) DBG | About to run SSH command:
	I0729 20:09:29.025683  755599 main.go:141] libmachine: (ha-344518) DBG | exit 0
	I0729 20:09:29.147987  755599 main.go:141] libmachine: (ha-344518) DBG | SSH cmd err, output: <nil>: 
	I0729 20:09:29.148225  755599 main.go:141] libmachine: (ha-344518) KVM machine creation complete!
	I0729 20:09:29.148740  755599 main.go:141] libmachine: (ha-344518) Calling .GetConfigRaw
	I0729 20:09:29.149286  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:29.149482  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:29.149639  755599 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 20:09:29.149657  755599 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:09:29.150765  755599 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 20:09:29.150780  755599 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 20:09:29.150786  755599 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 20:09:29.150792  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:29.153178  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.153584  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.153629  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.153741  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:29.153910  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.154078  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.154233  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:29.154381  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:09:29.154599  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:09:29.154615  755599 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 20:09:29.255168  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:09:29.255191  755599 main.go:141] libmachine: Detecting the provisioner...
	I0729 20:09:29.255198  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:29.258198  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.258528  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.258570  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.258733  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:29.258956  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.259147  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.259303  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:29.259460  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:09:29.259658  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:09:29.259671  755599 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 20:09:29.360632  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 20:09:29.360702  755599 main.go:141] libmachine: found compatible host: buildroot
	I0729 20:09:29.360709  755599 main.go:141] libmachine: Provisioning with buildroot...
	I0729 20:09:29.360717  755599 main.go:141] libmachine: (ha-344518) Calling .GetMachineName
	I0729 20:09:29.360983  755599 buildroot.go:166] provisioning hostname "ha-344518"
	I0729 20:09:29.361016  755599 main.go:141] libmachine: (ha-344518) Calling .GetMachineName
	I0729 20:09:29.361230  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:29.363712  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.364003  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.364024  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.364212  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:29.364387  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.364632  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.364808  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:29.364994  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:09:29.365155  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:09:29.365166  755599 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344518 && echo "ha-344518" | sudo tee /etc/hostname
	I0729 20:09:29.482065  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344518
	
	I0729 20:09:29.482099  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:29.485276  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.485636  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.485664  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.485828  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:29.486070  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.486314  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.486479  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:29.486680  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:09:29.486859  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:09:29.486876  755599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344518/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 20:09:29.596714  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:09:29.596745  755599 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 20:09:29.596764  755599 buildroot.go:174] setting up certificates
	I0729 20:09:29.596775  755599 provision.go:84] configureAuth start
	I0729 20:09:29.596783  755599 main.go:141] libmachine: (ha-344518) Calling .GetMachineName
	I0729 20:09:29.597068  755599 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:09:29.599699  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.600142  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.600171  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.600336  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:29.602797  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.603076  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.603123  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.603328  755599 provision.go:143] copyHostCerts
	I0729 20:09:29.603364  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:09:29.603407  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem, removing ...
	I0729 20:09:29.603420  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:09:29.603500  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 20:09:29.603609  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:09:29.603644  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem, removing ...
	I0729 20:09:29.603655  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:09:29.603697  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 20:09:29.603760  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:09:29.603788  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem, removing ...
	I0729 20:09:29.603798  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:09:29.603831  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 20:09:29.603894  755599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.ha-344518 san=[127.0.0.1 192.168.39.238 ha-344518 localhost minikube]
	I0729 20:09:29.704896  755599 provision.go:177] copyRemoteCerts
	I0729 20:09:29.704996  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 20:09:29.705021  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:29.707815  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.708151  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.708173  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.708381  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:29.708562  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.708701  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:29.708815  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
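Each "ssh_runner.go:195] Run:" line in this log is a command executed through an SSH client like the one just created (user docker, key .minikube/machines/ha-344518/id_rsa, 192.168.39.238:22). For reproducing a single remote command by hand, a rough golang.org/x/crypto/ssh sketch; the lax host-key callback is only acceptable because the target is a throwaway test VM:

// Sketch: run one command on the minikube KVM guest the way the ssh_runner
// lines above do. Key path, user and IP are copied from the log.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // disposable test VM only
	}
	client, err := ssh.Dial("tcp", "192.168.39.238:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}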
	I0729 20:09:29.789970  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 20:09:29.790054  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 20:09:29.811978  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 20:09:29.812070  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 20:09:29.833425  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 20:09:29.833516  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 20:09:29.855095  755599 provision.go:87] duration metric: took 258.307019ms to configureAuth
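configureAuth above signs a docker-machine-style server certificate with the minikube CA, valid for the SANs listed at 20:09:29.603894 (127.0.0.1, 192.168.39.238, ha-344518, localhost, minikube), then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. A crypto/x509 sketch of that signing step; file names are illustrative, the CA key is assumed to be PKCS#1 RSA, and error handling is trimmed for brevity:

// Sketch only: issue a server certificate from an existing CA with the SANs
// shown in the log. Not minikube's code; a minimal standard-library version.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key

	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-344518"}}, // org from the log
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-344518", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.238")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
}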
	I0729 20:09:29.855125  755599 buildroot.go:189] setting minikube options for container-runtime
	I0729 20:09:29.855328  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:09:29.855418  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:29.858154  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.858489  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.858515  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.858679  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:29.858885  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.859022  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.859206  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:29.859347  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:09:29.859508  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:09:29.859530  755599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 20:09:30.108935  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 20:09:30.108960  755599 main.go:141] libmachine: Checking connection to Docker...
	I0729 20:09:30.108969  755599 main.go:141] libmachine: (ha-344518) Calling .GetURL
	I0729 20:09:30.110328  755599 main.go:141] libmachine: (ha-344518) DBG | Using libvirt version 6000000
	I0729 20:09:30.112412  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.112803  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:30.112831  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.112994  755599 main.go:141] libmachine: Docker is up and running!
	I0729 20:09:30.113013  755599 main.go:141] libmachine: Reticulating splines...
	I0729 20:09:30.113020  755599 client.go:171] duration metric: took 23.794206805s to LocalClient.Create
	I0729 20:09:30.113043  755599 start.go:167] duration metric: took 23.79426731s to libmachine.API.Create "ha-344518"
	I0729 20:09:30.113053  755599 start.go:293] postStartSetup for "ha-344518" (driver="kvm2")
	I0729 20:09:30.113062  755599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 20:09:30.113077  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:30.113372  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 20:09:30.113421  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:30.115495  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.115798  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:30.115825  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.116023  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:30.116223  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:30.116398  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:30.116596  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:09:30.198103  755599 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 20:09:30.202176  755599 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 20:09:30.202216  755599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 20:09:30.202297  755599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 20:09:30.202392  755599 filesync.go:149] local asset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> 7409622.pem in /etc/ssl/certs
	I0729 20:09:30.202404  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /etc/ssl/certs/7409622.pem
	I0729 20:09:30.202493  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 20:09:30.211254  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:09:30.233268  755599 start.go:296] duration metric: took 120.202296ms for postStartSetup
	I0729 20:09:30.233331  755599 main.go:141] libmachine: (ha-344518) Calling .GetConfigRaw
	I0729 20:09:30.234049  755599 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:09:30.236633  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.236926  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:30.236972  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.237181  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:09:30.237355  755599 start.go:128] duration metric: took 23.936687923s to createHost
	I0729 20:09:30.237381  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:30.239552  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.239809  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:30.239842  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.239987  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:30.240179  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:30.240344  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:30.240483  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:30.240647  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:09:30.240821  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:09:30.240831  755599 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 20:09:30.344583  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722283770.321268422
	
	I0729 20:09:30.344619  755599 fix.go:216] guest clock: 1722283770.321268422
	I0729 20:09:30.344627  755599 fix.go:229] Guest: 2024-07-29 20:09:30.321268422 +0000 UTC Remote: 2024-07-29 20:09:30.237366573 +0000 UTC m=+24.042639080 (delta=83.901849ms)
	I0729 20:09:30.344649  755599 fix.go:200] guest clock delta is within tolerance: 83.901849ms
	I0729 20:09:30.344655  755599 start.go:83] releasing machines lock for "ha-344518", held for 24.044068964s
	I0729 20:09:30.344677  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:30.344929  755599 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:09:30.347733  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.348070  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:30.348103  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.348263  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:30.348804  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:30.348977  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:30.349086  755599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 20:09:30.349152  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:30.349175  755599 ssh_runner.go:195] Run: cat /version.json
	I0729 20:09:30.349199  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:30.352011  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.352060  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.352365  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:30.352393  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.352424  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:30.352444  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.352520  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:30.352709  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:30.352726  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:30.352868  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:30.352878  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:30.353002  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:30.353079  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:09:30.353122  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:09:30.457590  755599 ssh_runner.go:195] Run: systemctl --version
	I0729 20:09:30.463336  755599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 20:09:30.618446  755599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 20:09:30.624168  755599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 20:09:30.624257  755599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 20:09:30.639417  755599 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 20:09:30.639452  755599 start.go:495] detecting cgroup driver to use...
	I0729 20:09:30.639529  755599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 20:09:30.656208  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 20:09:30.669079  755599 docker.go:216] disabling cri-docker service (if available) ...
	I0729 20:09:30.669165  755599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 20:09:30.682267  755599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 20:09:30.695146  755599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 20:09:30.801367  755599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 20:09:30.933238  755599 docker.go:232] disabling docker service ...
	I0729 20:09:30.933329  755599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 20:09:30.946563  755599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 20:09:30.958984  755599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 20:09:31.083789  755599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 20:09:31.192813  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 20:09:31.208251  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 20:09:31.226231  755599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 20:09:31.226295  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:09:31.236691  755599 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 20:09:31.236766  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:09:31.246449  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:09:31.256666  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:09:31.266826  755599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 20:09:31.276417  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:09:31.285691  755599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:09:31.300856  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
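The run of "sed -i" commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause_image becomes registry.k8s.io/pause:3.9, cgroup_manager becomes cgroupfs with conmon_cgroup = "pod" re-added right after it, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A Go sketch of the first two edits (the default_sysctls injection is omitted for brevity); it has to run on the guest, not the CI host:

// Sketch of the sed edits above, applied with regexp in one pass.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)
	// pause_image = "registry.k8s.io/pause:3.9"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// drop any existing conmon_cgroup line, then re-add it after cgroup_manager
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		log.Fatal(err)
	}
}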
	I0729 20:09:31.310029  755599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 20:09:31.318257  755599 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 20:09:31.318321  755599 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 20:09:31.329044  755599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 20:09:31.337242  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:09:31.439976  755599 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 20:09:31.568009  755599 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 20:09:31.568114  755599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 20:09:31.572733  755599 start.go:563] Will wait 60s for crictl version
	I0729 20:09:31.572795  755599 ssh_runner.go:195] Run: which crictl
	I0729 20:09:31.576009  755599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 20:09:31.612236  755599 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 20:09:31.612336  755599 ssh_runner.go:195] Run: crio --version
	I0729 20:09:31.637427  755599 ssh_runner.go:195] Run: crio --version
	I0729 20:09:31.663928  755599 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 20:09:31.665127  755599 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:09:31.667692  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:31.667981  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:31.668000  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:31.668234  755599 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 20:09:31.672061  755599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 20:09:31.684203  755599 kubeadm.go:883] updating cluster {Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 20:09:31.684303  755599 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:09:31.684354  755599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:09:31.713791  755599 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 20:09:31.713860  755599 ssh_runner.go:195] Run: which lz4
	I0729 20:09:31.717278  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 20:09:31.717389  755599 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 20:09:31.721078  755599 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 20:09:31.721114  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 20:09:32.888232  755599 crio.go:462] duration metric: took 1.170872647s to copy over tarball
	I0729 20:09:32.888342  755599 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 20:09:34.911526  755599 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.023148425s)
	I0729 20:09:34.911564  755599 crio.go:469] duration metric: took 2.023293724s to extract the tarball
	I0729 20:09:34.911572  755599 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 20:09:34.949385  755599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:09:34.996988  755599 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 20:09:34.997024  755599 cache_images.go:84] Images are preloaded, skipping loading
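The two "sudo crictl images --output json" runs bracket the preload: the first finds no kube-apiserver:v1.30.3 image, which triggers the tarball copy and extraction, and the second confirms everything is present. A sketch of that check; the JSON field names (images, repoTags) are an assumption about crictl's output shape rather than something shown in this log:

// Sketch: decide whether the preloaded control-plane image is already present.
// The struct fields are assumptions about `crictl images --output json`.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		log.Fatal(err)
	}
	found := false
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, "kube-apiserver:v1.30.3") {
				found = true
			}
		}
	}
	fmt.Println("kube-apiserver preloaded:", found)
}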
	I0729 20:09:34.997039  755599 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.30.3 crio true true} ...
	I0729 20:09:34.997188  755599 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 20:09:34.997274  755599 ssh_runner.go:195] Run: crio config
	I0729 20:09:35.039660  755599 cni.go:84] Creating CNI manager for ""
	I0729 20:09:35.039682  755599 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 20:09:35.039693  755599 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 20:09:35.039715  755599 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-344518 NodeName:ha-344518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 20:09:35.039844  755599 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-344518"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
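The kubeadm config above is one multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml a few lines further down. A quick sanity check before `kubeadm init` is to decode each document and confirm, for example, that cgroupDriver agrees with the cgroup_manager pushed into CRI-O earlier; a sketch using gopkg.in/yaml.v3:

// Sketch: walk the generated multi-document kubeadm.yaml and print each
// document's kind plus the kubelet cgroupDriver (expected: "cgroupfs").
package main

import (
	"errors"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			log.Fatal(err)
		}
		fmt.Println("kind:", doc["kind"])
		if driver, ok := doc["cgroupDriver"]; ok {
			fmt.Println("  cgroupDriver:", driver)
		}
	}
}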
	I0729 20:09:35.039866  755599 kube-vip.go:115] generating kube-vip config ...
	I0729 20:09:35.039914  755599 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 20:09:35.054787  755599 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 20:09:35.054924  755599 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
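The manifest printed by kube-vip.go:137 is a static pod that runs on the host network of every control plane, takes a leader-election lease (plndr-cp-lock) and announces the HA virtual IP 192.168.39.254 on eth0, load-balancing API traffic on port 8443. minikube renders it from an internal template; the text/template sketch below only illustrates the idea and is not minikube's actual template:

// Sketch: render the VIP-related environment entries of a kube-vip static pod
// from a few parameters. Illustrative template text, not minikube's.
package main

import (
	"os"
	"text/template"
)

type vipConfig struct {
	VIP       string
	Interface string
	Port      string
}

const envTmpl = `    env:
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "{{ .Port }}"
`

func main() {
	t := template.Must(template.New("kube-vip-env").Parse(envTmpl))
	cfg := vipConfig{VIP: "192.168.39.254", Interface: "eth0", Port: "8443"}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}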
	I0729 20:09:35.055003  755599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 20:09:35.064723  755599 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 20:09:35.064797  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 20:09:35.073848  755599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 20:09:35.088657  755599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 20:09:35.103369  755599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 20:09:35.118598  755599 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 20:09:35.133021  755599 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 20:09:35.136443  755599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 20:09:35.147245  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:09:35.272541  755599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:09:35.287804  755599 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518 for IP: 192.168.39.238
	I0729 20:09:35.287823  755599 certs.go:194] generating shared ca certs ...
	I0729 20:09:35.287839  755599 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:35.287986  755599 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 20:09:35.288021  755599 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 20:09:35.288049  755599 certs.go:256] generating profile certs ...
	I0729 20:09:35.288127  755599 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key
	I0729 20:09:35.288146  755599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.crt with IP's: []
	I0729 20:09:35.800414  755599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.crt ...
	I0729 20:09:35.800449  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.crt: {Name:mka4861ceb4d2b4f4f8e00578a58573ad449da85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:35.800649  755599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key ...
	I0729 20:09:35.800665  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key: {Name:mkc963128b999a495ef61bfb68512b3764f6d860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:35.800770  755599 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.3e09c1c5
	I0729 20:09:35.800790  755599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.3e09c1c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.254]
	I0729 20:09:35.908817  755599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.3e09c1c5 ...
	I0729 20:09:35.908862  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.3e09c1c5: {Name:mk1a566c5922b43f8e6d1c091786f27e0530099b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:35.909074  755599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.3e09c1c5 ...
	I0729 20:09:35.909098  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.3e09c1c5: {Name:mk8d83972e312290d7873f49017743d9eba53fc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:35.909210  755599 certs.go:381] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.3e09c1c5 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt
	I0729 20:09:35.909349  755599 certs.go:385] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.3e09c1c5 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key
	I0729 20:09:35.909454  755599 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key
	I0729 20:09:35.909478  755599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt with IP's: []
	I0729 20:09:36.165670  755599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt ...
	I0729 20:09:36.165713  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt: {Name:mk37e45b34dcfba0257c9845376f02e95587a990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:36.165909  755599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key ...
	I0729 20:09:36.165925  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key: {Name:mk99e5bb71e0f27e47589639f230663907745de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
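The apiserver certificate generated at 20:09:35.800790 carries SANs for 127.0.0.1, the node IP 192.168.39.238, the HA VIP 192.168.39.254, 10.0.0.1, and 10.96.0.1, the first address of the 10.96.0.0/12 service CIDR (what the in-cluster kubernetes Service resolves to). How that first service IP falls out of the CIDR, as a short standard-library sketch:

// Sketch: derive the first usable address of the service CIDR, which is why
// 10.96.0.1 appears in the apiserver certificate SANs above.
package main

import (
	"fmt"
	"log"
	"net/netip"
)

func main() {
	prefix, err := netip.ParsePrefix("10.96.0.0/12") // ServiceCIDR from the cluster config
	if err != nil {
		log.Fatal(err)
	}
	first := prefix.Addr().Next() // network address + 1
	fmt.Println(first)            // 10.96.0.1
}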
	I0729 20:09:36.166020  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 20:09:36.166044  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 20:09:36.166060  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 20:09:36.166080  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 20:09:36.166096  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 20:09:36.166115  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 20:09:36.166136  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 20:09:36.166156  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 20:09:36.166231  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem (1338 bytes)
	W0729 20:09:36.166282  755599 certs.go:480] ignoring /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962_empty.pem, impossibly tiny 0 bytes
	I0729 20:09:36.166295  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 20:09:36.166333  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 20:09:36.166366  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 20:09:36.166404  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 20:09:36.166461  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:09:36.166500  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:09:36.166520  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem -> /usr/share/ca-certificates/740962.pem
	I0729 20:09:36.166540  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /usr/share/ca-certificates/7409622.pem
	I0729 20:09:36.167816  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 20:09:36.193720  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 20:09:36.214608  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 20:09:36.258795  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 20:09:36.280321  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 20:09:36.301377  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 20:09:36.322311  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 20:09:36.346369  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 20:09:36.370285  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 20:09:36.393965  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem --> /usr/share/ca-certificates/740962.pem (1338 bytes)
	I0729 20:09:36.417695  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /usr/share/ca-certificates/7409622.pem (1708 bytes)
	I0729 20:09:36.441939  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 20:09:36.458556  755599 ssh_runner.go:195] Run: openssl version
	I0729 20:09:36.464155  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 20:09:36.473986  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:09:36.478117  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:09:36.478159  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:09:36.483562  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 20:09:36.493208  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/740962.pem && ln -fs /usr/share/ca-certificates/740962.pem /etc/ssl/certs/740962.pem"
	I0729 20:09:36.502511  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/740962.pem
	I0729 20:09:36.506272  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 20:09:36.506324  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/740962.pem
	I0729 20:09:36.511358  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/740962.pem /etc/ssl/certs/51391683.0"
	I0729 20:09:36.520556  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7409622.pem && ln -fs /usr/share/ca-certificates/7409622.pem /etc/ssl/certs/7409622.pem"
	I0729 20:09:36.529563  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7409622.pem
	I0729 20:09:36.533411  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 20:09:36.533459  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7409622.pem
	I0729 20:09:36.538498  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7409622.pem /etc/ssl/certs/3ec20f2e.0"
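The openssl/ln sequence above publishes the copied CAs to OpenSSL-based clients on the guest: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash with a .0 suffix (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the two test certificates). A sketch reproducing one such link; it must run as root and the paths mirror the log:

// Sketch: compute a certificate's OpenSSL subject hash and create the
// corresponding /etc/ssl/certs/<hash>.0 symlink, mirroring the commands above.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const cert = "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if _, err := os.Lstat(link); err == nil {
		return // already linked
	}
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
}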
	I0729 20:09:36.547613  755599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:09:36.551115  755599 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 20:09:36.551173  755599 kubeadm.go:392] StartCluster: {Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:09:36.551254  755599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 20:09:36.551306  755599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 20:09:36.585331  755599 cri.go:89] found id: ""
	I0729 20:09:36.585402  755599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 20:09:36.594338  755599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 20:09:36.602921  755599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 20:09:36.612154  755599 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 20:09:36.612174  755599 kubeadm.go:157] found existing configuration files:
	
	I0729 20:09:36.612213  755599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 20:09:36.621367  755599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 20:09:36.621424  755599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 20:09:36.631445  755599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 20:09:36.641207  755599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 20:09:36.641263  755599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 20:09:36.651213  755599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 20:09:36.660732  755599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 20:09:36.660794  755599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 20:09:36.670772  755599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 20:09:36.680213  755599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 20:09:36.680261  755599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 20:09:36.690019  755599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 20:09:36.791377  755599 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 20:09:36.791450  755599 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 20:09:36.934210  755599 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 20:09:36.934358  755599 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 20:09:36.934470  755599 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 20:09:37.145429  755599 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 20:09:37.261585  755599 out.go:204]   - Generating certificates and keys ...
	I0729 20:09:37.261702  755599 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 20:09:37.261764  755599 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 20:09:37.369535  755599 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 20:09:37.493916  755599 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 20:09:37.819344  755599 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 20:09:38.049749  755599 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 20:09:38.109721  755599 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 20:09:38.109958  755599 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-344518 localhost] and IPs [192.168.39.238 127.0.0.1 ::1]
	I0729 20:09:38.237477  755599 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 20:09:38.237784  755599 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-344518 localhost] and IPs [192.168.39.238 127.0.0.1 ::1]
	I0729 20:09:38.391581  755599 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 20:09:38.620918  755599 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 20:09:38.819819  755599 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 20:09:38.820100  755599 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 20:09:39.226621  755599 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 20:09:39.506614  755599 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 20:09:39.675030  755599 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 20:09:39.813232  755599 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 20:09:40.000149  755599 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 20:09:40.000850  755599 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 20:09:40.003796  755599 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 20:09:40.005618  755599 out.go:204]   - Booting up control plane ...
	I0729 20:09:40.005729  755599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 20:09:40.005821  755599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 20:09:40.006162  755599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 20:09:40.027464  755599 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 20:09:40.028255  755599 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 20:09:40.028317  755599 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 20:09:40.157807  755599 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 20:09:40.157940  755599 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 20:09:41.158624  755599 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001286373s
	I0729 20:09:41.158748  755599 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 20:09:46.897953  755599 kubeadm.go:310] [api-check] The API server is healthy after 5.742089048s
	I0729 20:09:46.910263  755599 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 20:09:46.955430  755599 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 20:09:46.982828  755599 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 20:09:46.983075  755599 kubeadm.go:310] [mark-control-plane] Marking the node ha-344518 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 20:09:46.995548  755599 kubeadm.go:310] [bootstrap-token] Using token: lcul30.lktilqyd6grpi0f8
	I0729 20:09:46.997450  755599 out.go:204]   - Configuring RBAC rules ...
	I0729 20:09:46.997610  755599 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 20:09:47.009334  755599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 20:09:47.018324  755599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 20:09:47.022284  755599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 20:09:47.026819  755599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 20:09:47.029990  755599 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 20:09:47.306455  755599 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 20:09:47.731708  755599 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 20:09:48.305722  755599 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 20:09:48.306832  755599 kubeadm.go:310] 
	I0729 20:09:48.306923  755599 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 20:09:48.306936  755599 kubeadm.go:310] 
	I0729 20:09:48.307036  755599 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 20:09:48.307049  755599 kubeadm.go:310] 
	I0729 20:09:48.307091  755599 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 20:09:48.307166  755599 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 20:09:48.307230  755599 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 20:09:48.307240  755599 kubeadm.go:310] 
	I0729 20:09:48.307341  755599 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 20:09:48.307362  755599 kubeadm.go:310] 
	I0729 20:09:48.307430  755599 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 20:09:48.307441  755599 kubeadm.go:310] 
	I0729 20:09:48.307519  755599 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 20:09:48.307628  755599 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 20:09:48.307745  755599 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 20:09:48.307765  755599 kubeadm.go:310] 
	I0729 20:09:48.307896  755599 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 20:09:48.308052  755599 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 20:09:48.308063  755599 kubeadm.go:310] 
	I0729 20:09:48.308190  755599 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lcul30.lktilqyd6grpi0f8 \
	I0729 20:09:48.308329  755599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6ca3a9d55ee61a543466ff10da1967c1b50ddc5ed0f369803448ea7dd15a35e4 \
	I0729 20:09:48.308364  755599 kubeadm.go:310] 	--control-plane 
	I0729 20:09:48.308371  755599 kubeadm.go:310] 
	I0729 20:09:48.308465  755599 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 20:09:48.308480  755599 kubeadm.go:310] 
	I0729 20:09:48.308584  755599 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lcul30.lktilqyd6grpi0f8 \
	I0729 20:09:48.308756  755599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6ca3a9d55ee61a543466ff10da1967c1b50ddc5ed0f369803448ea7dd15a35e4 
	I0729 20:09:48.308937  755599 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
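(Aside: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. Assuming an RSA CA and minikube's certificate directory /var/lib/minikube/certs from the [certs] phase above, it could be recomputed by hand roughly as follows; this is an illustrative sketch, not part of the test run.)
	# sketch: recompute the discovery-token CA cert hash from the CA certificate
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | sha256sum | cut -d' ' -f1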
	I0729 20:09:48.308953  755599 cni.go:84] Creating CNI manager for ""
	I0729 20:09:48.308965  755599 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 20:09:48.311540  755599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 20:09:48.312840  755599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 20:09:48.317892  755599 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 20:09:48.317910  755599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 20:09:48.335029  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 20:09:48.651905  755599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 20:09:48.652057  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-344518 minikube.k8s.io/updated_at=2024_07_29T20_09_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a minikube.k8s.io/name=ha-344518 minikube.k8s.io/primary=true
	I0729 20:09:48.652059  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:48.671597  755599 ops.go:34] apiserver oom_adj: -16
	I0729 20:09:48.802213  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:49.302225  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:49.802558  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:50.302810  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:50.802819  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:51.302714  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:51.802603  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:52.302970  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:52.803281  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:53.302597  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:53.802801  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:54.302351  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:54.803097  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:55.302327  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:55.803096  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:56.302252  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:56.802499  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:57.303044  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:57.803175  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:58.302637  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:58.803287  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:59.303208  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:59.802965  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:10:00.303191  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:10:00.802305  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:10:00.876177  755599 kubeadm.go:1113] duration metric: took 12.224218004s to wait for elevateKubeSystemPrivileges
	I0729 20:10:00.876216  755599 kubeadm.go:394] duration metric: took 24.325047279s to StartCluster
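(Aside: the burst of "kubectl get sa default" calls above is a plain poll; minikube re-runs the same command roughly every 500ms until the default ServiceAccount exists, which is what the 12.2s elevateKubeSystemPrivileges metric measures. A rough shell equivalent of that wait loop, illustrative only:)
	# poll until the controller-manager has created the "default" ServiceAccount
	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done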
	I0729 20:10:00.876241  755599 settings.go:142] acquiring lock: {Name:mk9a2eb797f60b19768f4bfa250a8d2214a5ca12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:10:00.876354  755599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:10:00.877048  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/kubeconfig: {Name:mk9e65e9af9b71b889324d8c5e2a1adfebbca588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:10:00.877284  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 20:10:00.877294  755599 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:10:00.877338  755599 start.go:241] waiting for startup goroutines ...
	I0729 20:10:00.877348  755599 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 20:10:00.877412  755599 addons.go:69] Setting storage-provisioner=true in profile "ha-344518"
	I0729 20:10:00.877426  755599 addons.go:69] Setting default-storageclass=true in profile "ha-344518"
	I0729 20:10:00.877448  755599 addons.go:234] Setting addon storage-provisioner=true in "ha-344518"
	I0729 20:10:00.877451  755599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-344518"
	I0729 20:10:00.877497  755599 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:10:00.877578  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:10:00.877922  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:00.877975  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:00.877922  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:00.878077  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:00.893520  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43289
	I0729 20:10:00.893530  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43087
	I0729 20:10:00.893998  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:00.894081  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:00.894569  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:00.894581  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:00.894593  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:00.894598  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:00.894956  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:00.894969  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:00.895228  755599 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:10:00.895495  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:00.895530  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:00.897514  755599 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:10:00.897876  755599 kapi.go:59] client config for ha-344518: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.crt", KeyFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key", CAFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 20:10:00.898435  755599 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 20:10:00.898810  755599 addons.go:234] Setting addon default-storageclass=true in "ha-344518"
	I0729 20:10:00.898860  755599 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:10:00.899242  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:00.899284  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:00.911267  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0729 20:10:00.911791  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:00.912332  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:00.912354  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:00.912736  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:00.912957  755599 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:10:00.913755  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42901
	I0729 20:10:00.914382  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:00.914921  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:00.914945  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:00.914984  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:10:00.915304  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:00.915786  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:00.915821  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:00.917371  755599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 20:10:00.918684  755599 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 20:10:00.918700  755599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 20:10:00.918715  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:10:00.921771  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:00.922271  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:10:00.922292  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:00.922546  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:10:00.922724  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:10:00.922883  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:10:00.923002  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:10:00.931534  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45715
	I0729 20:10:00.931981  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:00.932645  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:00.932670  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:00.933000  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:00.933185  755599 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:10:00.934837  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:10:00.935030  755599 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 20:10:00.935044  755599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 20:10:00.935058  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:10:00.937812  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:00.938208  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:10:00.938239  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:00.938366  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:10:00.938551  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:10:00.938709  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:10:00.938855  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:10:00.964873  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 20:10:01.029080  755599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 20:10:01.085656  755599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 20:10:01.389685  755599 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
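(Aside: the sed pipeline a few lines up rewrites the coredns ConfigMap so the Corefile gains a log directive and a hosts block ahead of the forward plugin, which is what produces the "host record injected" line above. Reconstructed from the sed expressions; exact whitespace may differ:)
	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}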
	I0729 20:10:01.617106  755599 main.go:141] libmachine: Making call to close driver server
	I0729 20:10:01.617139  755599 main.go:141] libmachine: (ha-344518) Calling .Close
	I0729 20:10:01.617119  755599 main.go:141] libmachine: Making call to close driver server
	I0729 20:10:01.617203  755599 main.go:141] libmachine: (ha-344518) Calling .Close
	I0729 20:10:01.617466  755599 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:10:01.617486  755599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:10:01.617495  755599 main.go:141] libmachine: Making call to close driver server
	I0729 20:10:01.617503  755599 main.go:141] libmachine: (ha-344518) Calling .Close
	I0729 20:10:01.617502  755599 main.go:141] libmachine: (ha-344518) DBG | Closing plugin on server side
	I0729 20:10:01.617468  755599 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:10:01.617521  755599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:10:01.617530  755599 main.go:141] libmachine: Making call to close driver server
	I0729 20:10:01.617537  755599 main.go:141] libmachine: (ha-344518) Calling .Close
	I0729 20:10:01.617817  755599 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:10:01.617831  755599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:10:01.617832  755599 main.go:141] libmachine: (ha-344518) DBG | Closing plugin on server side
	I0729 20:10:01.617874  755599 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:10:01.617894  755599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:10:01.618040  755599 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 20:10:01.618048  755599 round_trippers.go:469] Request Headers:
	I0729 20:10:01.618058  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:10:01.618062  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:10:01.632115  755599 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0729 20:10:01.632861  755599 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 20:10:01.632878  755599 round_trippers.go:469] Request Headers:
	I0729 20:10:01.632888  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:10:01.632895  755599 round_trippers.go:473]     Content-Type: application/json
	I0729 20:10:01.632899  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:10:01.635477  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:10:01.635717  755599 main.go:141] libmachine: Making call to close driver server
	I0729 20:10:01.635741  755599 main.go:141] libmachine: (ha-344518) Calling .Close
	I0729 20:10:01.636016  755599 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:10:01.636045  755599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:10:01.637761  755599 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 20:10:01.639042  755599 addons.go:510] duration metric: took 761.689784ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 20:10:01.639081  755599 start.go:246] waiting for cluster config update ...
	I0729 20:10:01.639105  755599 start.go:255] writing updated cluster config ...
	I0729 20:10:01.640969  755599 out.go:177] 
	I0729 20:10:01.641988  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:10:01.642051  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:10:01.643862  755599 out.go:177] * Starting "ha-344518-m02" control-plane node in "ha-344518" cluster
	I0729 20:10:01.645038  755599 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:10:01.645067  755599 cache.go:56] Caching tarball of preloaded images
	I0729 20:10:01.645164  755599 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 20:10:01.645177  755599 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 20:10:01.645244  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:10:01.645427  755599 start.go:360] acquireMachinesLock for ha-344518-m02: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:10:01.645476  755599 start.go:364] duration metric: took 27.961µs to acquireMachinesLock for "ha-344518-m02"
	I0729 20:10:01.645496  755599 start.go:93] Provisioning new machine with config: &{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:10:01.645575  755599 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 20:10:01.647191  755599 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 20:10:01.647291  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:01.647328  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:01.662983  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46835
	I0729 20:10:01.663480  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:01.664045  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:01.664072  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:01.664434  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:01.664664  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetMachineName
	I0729 20:10:01.664850  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:01.665032  755599 start.go:159] libmachine.API.Create for "ha-344518" (driver="kvm2")
	I0729 20:10:01.665101  755599 client.go:168] LocalClient.Create starting
	I0729 20:10:01.665140  755599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem
	I0729 20:10:01.665178  755599 main.go:141] libmachine: Decoding PEM data...
	I0729 20:10:01.665197  755599 main.go:141] libmachine: Parsing certificate...
	I0729 20:10:01.665270  755599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem
	I0729 20:10:01.665319  755599 main.go:141] libmachine: Decoding PEM data...
	I0729 20:10:01.665339  755599 main.go:141] libmachine: Parsing certificate...
	I0729 20:10:01.665367  755599 main.go:141] libmachine: Running pre-create checks...
	I0729 20:10:01.665377  755599 main.go:141] libmachine: (ha-344518-m02) Calling .PreCreateCheck
	I0729 20:10:01.665585  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetConfigRaw
	I0729 20:10:01.665951  755599 main.go:141] libmachine: Creating machine...
	I0729 20:10:01.665966  755599 main.go:141] libmachine: (ha-344518-m02) Calling .Create
	I0729 20:10:01.666103  755599 main.go:141] libmachine: (ha-344518-m02) Creating KVM machine...
	I0729 20:10:01.667399  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found existing default KVM network
	I0729 20:10:01.667524  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found existing private KVM network mk-ha-344518
	I0729 20:10:01.667685  755599 main.go:141] libmachine: (ha-344518-m02) Setting up store path in /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02 ...
	I0729 20:10:01.667705  755599 main.go:141] libmachine: (ha-344518-m02) Building disk image from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 20:10:01.667733  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:01.667657  756009 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:10:01.667887  755599 main.go:141] libmachine: (ha-344518-m02) Downloading /home/jenkins/minikube-integration/19344-733808/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 20:10:01.948848  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:01.948699  756009 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa...
	I0729 20:10:02.042832  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:02.042689  756009 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/ha-344518-m02.rawdisk...
	I0729 20:10:02.042863  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Writing magic tar header
	I0729 20:10:02.042878  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Writing SSH key tar header
	I0729 20:10:02.042960  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:02.042878  756009 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02 ...
	I0729 20:10:02.043030  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02
	I0729 20:10:02.043050  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines
	I0729 20:10:02.043063  755599 main.go:141] libmachine: (ha-344518-m02) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02 (perms=drwx------)
	I0729 20:10:02.043081  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:10:02.043093  755599 main.go:141] libmachine: (ha-344518-m02) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines (perms=drwxr-xr-x)
	I0729 20:10:02.043115  755599 main.go:141] libmachine: (ha-344518-m02) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube (perms=drwxr-xr-x)
	I0729 20:10:02.043128  755599 main.go:141] libmachine: (ha-344518-m02) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808 (perms=drwxrwxr-x)
	I0729 20:10:02.043143  755599 main.go:141] libmachine: (ha-344518-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 20:10:02.043157  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808
	I0729 20:10:02.043167  755599 main.go:141] libmachine: (ha-344518-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 20:10:02.043182  755599 main.go:141] libmachine: (ha-344518-m02) Creating domain...
	I0729 20:10:02.043199  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 20:10:02.043213  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 20:10:02.043223  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Checking permissions on dir: /home
	I0729 20:10:02.043234  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Skipping /home - not owner
	I0729 20:10:02.044411  755599 main.go:141] libmachine: (ha-344518-m02) define libvirt domain using xml: 
	I0729 20:10:02.044429  755599 main.go:141] libmachine: (ha-344518-m02) <domain type='kvm'>
	I0729 20:10:02.044439  755599 main.go:141] libmachine: (ha-344518-m02)   <name>ha-344518-m02</name>
	I0729 20:10:02.044446  755599 main.go:141] libmachine: (ha-344518-m02)   <memory unit='MiB'>2200</memory>
	I0729 20:10:02.044453  755599 main.go:141] libmachine: (ha-344518-m02)   <vcpu>2</vcpu>
	I0729 20:10:02.044459  755599 main.go:141] libmachine: (ha-344518-m02)   <features>
	I0729 20:10:02.044474  755599 main.go:141] libmachine: (ha-344518-m02)     <acpi/>
	I0729 20:10:02.044485  755599 main.go:141] libmachine: (ha-344518-m02)     <apic/>
	I0729 20:10:02.044495  755599 main.go:141] libmachine: (ha-344518-m02)     <pae/>
	I0729 20:10:02.044503  755599 main.go:141] libmachine: (ha-344518-m02)     
	I0729 20:10:02.044529  755599 main.go:141] libmachine: (ha-344518-m02)   </features>
	I0729 20:10:02.044551  755599 main.go:141] libmachine: (ha-344518-m02)   <cpu mode='host-passthrough'>
	I0729 20:10:02.044558  755599 main.go:141] libmachine: (ha-344518-m02)   
	I0729 20:10:02.044569  755599 main.go:141] libmachine: (ha-344518-m02)   </cpu>
	I0729 20:10:02.044575  755599 main.go:141] libmachine: (ha-344518-m02)   <os>
	I0729 20:10:02.044581  755599 main.go:141] libmachine: (ha-344518-m02)     <type>hvm</type>
	I0729 20:10:02.044586  755599 main.go:141] libmachine: (ha-344518-m02)     <boot dev='cdrom'/>
	I0729 20:10:02.044655  755599 main.go:141] libmachine: (ha-344518-m02)     <boot dev='hd'/>
	I0729 20:10:02.044661  755599 main.go:141] libmachine: (ha-344518-m02)     <bootmenu enable='no'/>
	I0729 20:10:02.044666  755599 main.go:141] libmachine: (ha-344518-m02)   </os>
	I0729 20:10:02.044671  755599 main.go:141] libmachine: (ha-344518-m02)   <devices>
	I0729 20:10:02.044681  755599 main.go:141] libmachine: (ha-344518-m02)     <disk type='file' device='cdrom'>
	I0729 20:10:02.044697  755599 main.go:141] libmachine: (ha-344518-m02)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/boot2docker.iso'/>
	I0729 20:10:02.044708  755599 main.go:141] libmachine: (ha-344518-m02)       <target dev='hdc' bus='scsi'/>
	I0729 20:10:02.044717  755599 main.go:141] libmachine: (ha-344518-m02)       <readonly/>
	I0729 20:10:02.044722  755599 main.go:141] libmachine: (ha-344518-m02)     </disk>
	I0729 20:10:02.044756  755599 main.go:141] libmachine: (ha-344518-m02)     <disk type='file' device='disk'>
	I0729 20:10:02.044790  755599 main.go:141] libmachine: (ha-344518-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 20:10:02.044806  755599 main.go:141] libmachine: (ha-344518-m02)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/ha-344518-m02.rawdisk'/>
	I0729 20:10:02.044816  755599 main.go:141] libmachine: (ha-344518-m02)       <target dev='hda' bus='virtio'/>
	I0729 20:10:02.044822  755599 main.go:141] libmachine: (ha-344518-m02)     </disk>
	I0729 20:10:02.044831  755599 main.go:141] libmachine: (ha-344518-m02)     <interface type='network'>
	I0729 20:10:02.044838  755599 main.go:141] libmachine: (ha-344518-m02)       <source network='mk-ha-344518'/>
	I0729 20:10:02.044843  755599 main.go:141] libmachine: (ha-344518-m02)       <model type='virtio'/>
	I0729 20:10:02.044852  755599 main.go:141] libmachine: (ha-344518-m02)     </interface>
	I0729 20:10:02.044863  755599 main.go:141] libmachine: (ha-344518-m02)     <interface type='network'>
	I0729 20:10:02.044898  755599 main.go:141] libmachine: (ha-344518-m02)       <source network='default'/>
	I0729 20:10:02.044917  755599 main.go:141] libmachine: (ha-344518-m02)       <model type='virtio'/>
	I0729 20:10:02.044931  755599 main.go:141] libmachine: (ha-344518-m02)     </interface>
	I0729 20:10:02.044947  755599 main.go:141] libmachine: (ha-344518-m02)     <serial type='pty'>
	I0729 20:10:02.044960  755599 main.go:141] libmachine: (ha-344518-m02)       <target port='0'/>
	I0729 20:10:02.044971  755599 main.go:141] libmachine: (ha-344518-m02)     </serial>
	I0729 20:10:02.044984  755599 main.go:141] libmachine: (ha-344518-m02)     <console type='pty'>
	I0729 20:10:02.044996  755599 main.go:141] libmachine: (ha-344518-m02)       <target type='serial' port='0'/>
	I0729 20:10:02.045008  755599 main.go:141] libmachine: (ha-344518-m02)     </console>
	I0729 20:10:02.045028  755599 main.go:141] libmachine: (ha-344518-m02)     <rng model='virtio'>
	I0729 20:10:02.045040  755599 main.go:141] libmachine: (ha-344518-m02)       <backend model='random'>/dev/random</backend>
	I0729 20:10:02.045051  755599 main.go:141] libmachine: (ha-344518-m02)     </rng>
	I0729 20:10:02.045061  755599 main.go:141] libmachine: (ha-344518-m02)     
	I0729 20:10:02.045075  755599 main.go:141] libmachine: (ha-344518-m02)     
	I0729 20:10:02.045091  755599 main.go:141] libmachine: (ha-344518-m02)   </devices>
	I0729 20:10:02.045102  755599 main.go:141] libmachine: (ha-344518-m02) </domain>
	I0729 20:10:02.045114  755599 main.go:141] libmachine: (ha-344518-m02) 
	I0729 20:10:02.053275  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:0a:c1:60 in network default
	I0729 20:10:02.053938  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:02.053961  755599 main.go:141] libmachine: (ha-344518-m02) Ensuring networks are active...
	I0729 20:10:02.054689  755599 main.go:141] libmachine: (ha-344518-m02) Ensuring network default is active
	I0729 20:10:02.054979  755599 main.go:141] libmachine: (ha-344518-m02) Ensuring network mk-ha-344518 is active
	I0729 20:10:02.055318  755599 main.go:141] libmachine: (ha-344518-m02) Getting domain xml...
	I0729 20:10:02.056051  755599 main.go:141] libmachine: (ha-344518-m02) Creating domain...
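(Aside: the retries below poll libvirt for a DHCP lease on the mk-ha-344518 network until the new VM reports an address. The same state can be inspected manually with stock virsh commands; illustrative only, not part of the test run:)
	virsh --connect qemu:///system dumpxml ha-344518-m02          # show the domain XML defined above
	virsh --connect qemu:///system net-dhcp-leases mk-ha-344518   # list the leases minikube is waiting for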
	I0729 20:10:03.314645  755599 main.go:141] libmachine: (ha-344518-m02) Waiting to get IP...
	I0729 20:10:03.315559  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:03.316122  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:03.316150  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:03.316088  756009 retry.go:31] will retry after 216.191206ms: waiting for machine to come up
	I0729 20:10:03.533518  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:03.533951  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:03.533974  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:03.533889  756009 retry.go:31] will retry after 265.56964ms: waiting for machine to come up
	I0729 20:10:03.801430  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:03.801916  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:03.801953  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:03.801874  756009 retry.go:31] will retry after 377.103233ms: waiting for machine to come up
	I0729 20:10:04.180447  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:04.180994  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:04.181028  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:04.180923  756009 retry.go:31] will retry after 575.646899ms: waiting for machine to come up
	I0729 20:10:04.758309  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:04.758860  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:04.758893  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:04.758784  756009 retry.go:31] will retry after 493.74167ms: waiting for machine to come up
	I0729 20:10:05.254611  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:05.255019  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:05.255049  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:05.254958  756009 retry.go:31] will retry after 573.46082ms: waiting for machine to come up
	I0729 20:10:05.829842  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:05.830364  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:05.830393  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:05.830239  756009 retry.go:31] will retry after 958.136426ms: waiting for machine to come up
	I0729 20:10:06.790708  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:06.791203  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:06.791233  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:06.791140  756009 retry.go:31] will retry after 1.232792133s: waiting for machine to come up
	I0729 20:10:08.025788  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:08.026198  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:08.026221  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:08.026156  756009 retry.go:31] will retry after 1.770457566s: waiting for machine to come up
	I0729 20:10:09.797886  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:09.798308  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:09.798331  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:09.798245  756009 retry.go:31] will retry after 1.820441853s: waiting for machine to come up
	I0729 20:10:11.621110  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:11.621620  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:11.621650  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:11.621571  756009 retry.go:31] will retry after 1.80956907s: waiting for machine to come up
	I0729 20:10:13.433238  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:13.433725  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:13.433747  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:13.433687  756009 retry.go:31] will retry after 3.393381444s: waiting for machine to come up
	I0729 20:10:16.828308  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:16.828715  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:16.828745  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:16.828640  756009 retry.go:31] will retry after 4.18008266s: waiting for machine to come up
	I0729 20:10:21.014071  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:21.014671  755599 main.go:141] libmachine: (ha-344518-m02) Found IP for machine: 192.168.39.104
	I0729 20:10:21.014702  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has current primary IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:21.014712  755599 main.go:141] libmachine: (ha-344518-m02) Reserving static IP address...
	I0729 20:10:21.015170  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find host DHCP lease matching {name: "ha-344518-m02", mac: "52:54:00:24:a4:74", ip: "192.168.39.104"} in network mk-ha-344518
	I0729 20:10:21.094510  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Getting to WaitForSSH function...
	I0729 20:10:21.094544  755599 main.go:141] libmachine: (ha-344518-m02) Reserved static IP address: 192.168.39.104
	I0729 20:10:21.094557  755599 main.go:141] libmachine: (ha-344518-m02) Waiting for SSH to be available...
	I0729 20:10:21.097713  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:21.098116  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518
	I0729 20:10:21.098145  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find defined IP address of network mk-ha-344518 interface with MAC address 52:54:00:24:a4:74
	I0729 20:10:21.098311  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Using SSH client type: external
	I0729 20:10:21.098345  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa (-rw-------)
	I0729 20:10:21.098398  755599 main.go:141] libmachine: (ha-344518-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 20:10:21.098414  755599 main.go:141] libmachine: (ha-344518-m02) DBG | About to run SSH command:
	I0729 20:10:21.098428  755599 main.go:141] libmachine: (ha-344518-m02) DBG | exit 0
	I0729 20:10:21.102481  755599 main.go:141] libmachine: (ha-344518-m02) DBG | SSH cmd err, output: exit status 255: 
	I0729 20:10:21.102510  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0729 20:10:21.102520  755599 main.go:141] libmachine: (ha-344518-m02) DBG | command : exit 0
	I0729 20:10:21.102526  755599 main.go:141] libmachine: (ha-344518-m02) DBG | err     : exit status 255
	I0729 20:10:21.102533  755599 main.go:141] libmachine: (ha-344518-m02) DBG | output  : 
	I0729 20:10:24.104783  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Getting to WaitForSSH function...
	I0729 20:10:24.107452  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.109207  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.109238  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.109444  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Using SSH client type: external
	I0729 20:10:24.109486  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa (-rw-------)
	I0729 20:10:24.109531  755599 main.go:141] libmachine: (ha-344518-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 20:10:24.109544  755599 main.go:141] libmachine: (ha-344518-m02) DBG | About to run SSH command:
	I0729 20:10:24.109554  755599 main.go:141] libmachine: (ha-344518-m02) DBG | exit 0
	I0729 20:10:24.236129  755599 main.go:141] libmachine: (ha-344518-m02) DBG | SSH cmd err, output: <nil>: 
	I0729 20:10:24.236436  755599 main.go:141] libmachine: (ha-344518-m02) KVM machine creation complete!
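	The successful "exit 0" probe above is how the driver decides the guest's SSH service is reachable; the client options and key path are listed verbatim in the earlier "Using SSH client type: external" lines. Reassembled into a runnable command (argument order is an assumption, everything else is taken from the log):
	ssh -F /dev/null \
	    -o ConnectionAttempts=3 -o ConnectTimeout=10 \
	    -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
	    -o PasswordAuthentication=no -o ServerAliveInterval=60 \
	    -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa \
	    -p 22 docker@192.168.39.104 'exit 0'; echo "ssh exit status: $?"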
	I0729 20:10:24.236803  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetConfigRaw
	I0729 20:10:24.237362  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:24.237553  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:24.237733  755599 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 20:10:24.237750  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetState
	I0729 20:10:24.239100  755599 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 20:10:24.239117  755599 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 20:10:24.239127  755599 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 20:10:24.239133  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:24.241257  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.241549  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.241575  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.241720  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:24.241890  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.242053  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.242162  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:24.242305  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:10:24.242571  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0729 20:10:24.242584  755599 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 20:10:24.347201  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:10:24.347227  755599 main.go:141] libmachine: Detecting the provisioner...
	I0729 20:10:24.347240  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:24.349886  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.350239  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.350272  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.350403  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:24.350641  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.350839  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.350978  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:24.351152  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:10:24.351344  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0729 20:10:24.351357  755599 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 20:10:24.456711  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 20:10:24.456786  755599 main.go:141] libmachine: found compatible host: buildroot
	I0729 20:10:24.456792  755599 main.go:141] libmachine: Provisioning with buildroot...
	I0729 20:10:24.456803  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetMachineName
	I0729 20:10:24.457088  755599 buildroot.go:166] provisioning hostname "ha-344518-m02"
	I0729 20:10:24.457126  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetMachineName
	I0729 20:10:24.457361  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:24.460181  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.460520  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.460548  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.460715  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:24.460895  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.461030  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.461168  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:24.461371  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:10:24.461529  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0729 20:10:24.461543  755599 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344518-m02 && echo "ha-344518-m02" | sudo tee /etc/hostname
	I0729 20:10:24.577536  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344518-m02
	
	I0729 20:10:24.577590  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:24.580462  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.580900  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.580938  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.581111  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:24.581325  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.581510  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.581664  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:24.581841  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:10:24.582052  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0729 20:10:24.582077  755599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344518-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344518-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344518-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 20:10:24.691991  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:10:24.692024  755599 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 20:10:24.692057  755599 buildroot.go:174] setting up certificates
	I0729 20:10:24.692073  755599 provision.go:84] configureAuth start
	I0729 20:10:24.692085  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetMachineName
	I0729 20:10:24.692410  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:10:24.695188  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.695571  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.695598  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.695709  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:24.698369  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.698656  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.698689  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.698891  755599 provision.go:143] copyHostCerts
	I0729 20:10:24.698936  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:10:24.698984  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem, removing ...
	I0729 20:10:24.698999  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:10:24.699086  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 20:10:24.699186  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:10:24.699214  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem, removing ...
	I0729 20:10:24.699226  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:10:24.699270  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 20:10:24.699347  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:10:24.699374  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem, removing ...
	I0729 20:10:24.699384  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:10:24.699422  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 20:10:24.699525  755599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.ha-344518-m02 san=[127.0.0.1 192.168.39.104 ha-344518-m02 localhost minikube]
	I0729 20:10:24.871405  755599 provision.go:177] copyRemoteCerts
	I0729 20:10:24.871465  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 20:10:24.871491  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:24.874120  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.874490  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.874518  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.874708  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:24.874892  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.875026  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:24.875127  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	I0729 20:10:24.957261  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 20:10:24.957348  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 20:10:24.979592  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 20:10:24.979666  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 20:10:25.003753  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 20:10:25.003829  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 20:10:25.026533  755599 provision.go:87] duration metric: took 334.440906ms to configureAuth
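	configureAuth above generated a server certificate whose SANs (listed in the "generating server cert" line) cover 127.0.0.1, 192.168.39.104, ha-344518-m02, localhost and minikube, then pushed it to /etc/docker on the guest. A hedged way to double-check the SAN list from the Jenkins host, using the ServerCertPath shown in the "set auth options" line:
	# inspect the generated server certificate; the SANs should match the
	# "generating server cert" line a few entries above
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'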
	I0729 20:10:25.026563  755599 buildroot.go:189] setting minikube options for container-runtime
	I0729 20:10:25.026768  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:10:25.026860  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:25.029681  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.030032  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.030062  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.030231  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:25.030442  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:25.030680  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:25.030845  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:25.031036  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:10:25.031231  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0729 20:10:25.031248  755599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 20:10:25.287841  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
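	The %!s(MISSING) in the tee command a few lines above is the logger dropping a printf argument; judging by the file it writes and the content echoed back here, the command that actually ran was presumably (collapsed to one line):
	sudo mkdir -p /etc/sysconfig && printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio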
	
	I0729 20:10:25.287881  755599 main.go:141] libmachine: Checking connection to Docker...
	I0729 20:10:25.287892  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetURL
	I0729 20:10:25.289359  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Using libvirt version 6000000
	I0729 20:10:25.291673  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.291986  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.292006  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.292206  755599 main.go:141] libmachine: Docker is up and running!
	I0729 20:10:25.292228  755599 main.go:141] libmachine: Reticulating splines...
	I0729 20:10:25.292238  755599 client.go:171] duration metric: took 23.627123397s to LocalClient.Create
	I0729 20:10:25.292268  755599 start.go:167] duration metric: took 23.627239186s to libmachine.API.Create "ha-344518"
	I0729 20:10:25.292280  755599 start.go:293] postStartSetup for "ha-344518-m02" (driver="kvm2")
	I0729 20:10:25.292298  755599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 20:10:25.292321  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:25.292615  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 20:10:25.292640  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:25.294790  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.295171  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.295196  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.295456  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:25.295660  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:25.295881  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:25.296078  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	I0729 20:10:25.377381  755599 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 20:10:25.381145  755599 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 20:10:25.381171  755599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 20:10:25.381232  755599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 20:10:25.381303  755599 filesync.go:149] local asset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> 7409622.pem in /etc/ssl/certs
	I0729 20:10:25.381317  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /etc/ssl/certs/7409622.pem
	I0729 20:10:25.381396  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 20:10:25.389692  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:10:25.410728  755599 start.go:296] duration metric: took 118.430621ms for postStartSetup
	I0729 20:10:25.410777  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetConfigRaw
	I0729 20:10:25.411419  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:10:25.414097  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.414403  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.414427  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.414640  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:10:25.414835  755599 start.go:128] duration metric: took 23.769249347s to createHost
	I0729 20:10:25.414860  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:25.417227  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.417587  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.417614  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.417752  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:25.417947  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:25.418109  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:25.418226  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:25.418399  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:10:25.418563  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0729 20:10:25.418573  755599 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 20:10:25.524151  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722283825.502045174
	
	I0729 20:10:25.524175  755599 fix.go:216] guest clock: 1722283825.502045174
	I0729 20:10:25.524182  755599 fix.go:229] Guest: 2024-07-29 20:10:25.502045174 +0000 UTC Remote: 2024-07-29 20:10:25.41484648 +0000 UTC m=+79.220118978 (delta=87.198694ms)
	I0729 20:10:25.524200  755599 fix.go:200] guest clock delta is within tolerance: 87.198694ms
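	The guest-clock probe above also lost its format verbs to the logger; given the fractional-seconds value it parses (1722283825.502045174), the command is presumably:
	# presumed guest-clock probe; the delta reported above is guest time minus
	# the host-side timestamp taken when the SSH command returned
	date +%s.%N    # e.g. 1722283825.502045174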
	I0729 20:10:25.524205  755599 start.go:83] releasing machines lock for "ha-344518-m02", held for 23.878719016s
	I0729 20:10:25.524222  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:25.524541  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:10:25.527237  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.527733  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.527764  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.530408  755599 out.go:177] * Found network options:
	I0729 20:10:25.531705  755599 out.go:177]   - NO_PROXY=192.168.39.238
	W0729 20:10:25.533019  755599 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 20:10:25.533051  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:25.533605  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:25.533811  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:25.533872  755599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 20:10:25.533923  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	W0729 20:10:25.534007  755599 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 20:10:25.534071  755599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 20:10:25.534087  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:25.536706  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.536859  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.537171  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.537210  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.537244  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.537267  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.537292  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:25.537458  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:25.537530  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:25.537676  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:25.537686  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:25.537850  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:25.537853  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	I0729 20:10:25.538014  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	I0729 20:10:25.766802  755599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 20:10:25.773216  755599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 20:10:25.773298  755599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 20:10:25.788075  755599 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
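	The find invocation above is printed as a raw argument vector with its -printf verb swallowed by the logger; with shell quoting restored it is presumably:
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
	# per the next log line, this renamed /etc/cni/net.d/87-podman-bridge.conflist
	# to 87-podman-bridge.conflist.mk_disabled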
	I0729 20:10:25.788098  755599 start.go:495] detecting cgroup driver to use...
	I0729 20:10:25.788173  755599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 20:10:25.803257  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 20:10:25.815595  755599 docker.go:216] disabling cri-docker service (if available) ...
	I0729 20:10:25.815656  755599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 20:10:25.827786  755599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 20:10:25.839741  755599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 20:10:25.947907  755599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 20:10:26.097011  755599 docker.go:232] disabling docker service ...
	I0729 20:10:26.097103  755599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 20:10:26.112088  755599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 20:10:26.123704  755599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 20:10:26.259181  755599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 20:10:26.384791  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 20:10:26.398652  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 20:10:26.415590  755599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 20:10:26.415736  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:10:26.425383  755599 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 20:10:26.425459  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:10:26.435502  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:10:26.445453  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:10:26.455330  755599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 20:10:26.465058  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:10:26.474443  755599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:10:26.490588  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
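	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf pointing at the pause:3.9 image, using cgroupfs with conmon in the pod cgroup, and re-opening unprivileged ports via default_sysctls. A quick confirmation on the guest, with the expected values inferred from those commands:
	grep -E 'pause_image|cgroup_manager|conmon_cgroup|net.ipv4.ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, based on the sed commands in the log:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",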
	I0729 20:10:26.500191  755599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 20:10:26.509079  755599 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 20:10:26.509129  755599 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 20:10:26.521633  755599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 20:10:26.530264  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:10:26.643004  755599 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 20:10:26.771247  755599 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 20:10:26.771338  755599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 20:10:26.775753  755599 start.go:563] Will wait 60s for crictl version
	I0729 20:10:26.775817  755599 ssh_runner.go:195] Run: which crictl
	I0729 20:10:26.779060  755599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 20:10:26.817831  755599 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 20:10:26.817925  755599 ssh_runner.go:195] Run: crio --version
	I0729 20:10:26.844818  755599 ssh_runner.go:195] Run: crio --version
	I0729 20:10:26.872041  755599 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 20:10:26.873343  755599 out.go:177]   - env NO_PROXY=192.168.39.238
	I0729 20:10:26.874356  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:10:26.877071  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:26.877476  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:26.877507  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:26.877722  755599 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 20:10:26.881724  755599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 20:10:26.893411  755599 mustload.go:65] Loading cluster: ha-344518
	I0729 20:10:26.893636  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:10:26.893884  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:26.893911  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:26.908995  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43727
	I0729 20:10:26.909477  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:26.909979  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:26.909999  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:26.910377  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:26.910605  755599 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:10:26.912275  755599 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:10:26.912551  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:26.912586  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:26.927672  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38795
	I0729 20:10:26.928131  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:26.928640  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:26.928674  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:26.928989  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:26.929203  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:10:26.929375  755599 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518 for IP: 192.168.39.104
	I0729 20:10:26.929393  755599 certs.go:194] generating shared ca certs ...
	I0729 20:10:26.929414  755599 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:10:26.929568  755599 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 20:10:26.929624  755599 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 20:10:26.929638  755599 certs.go:256] generating profile certs ...
	I0729 20:10:26.929723  755599 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key
	I0729 20:10:26.929755  755599 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.174f3d4c
	I0729 20:10:26.929777  755599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.174f3d4c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.104 192.168.39.254]
	I0729 20:10:27.084609  755599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.174f3d4c ...
	I0729 20:10:27.084645  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.174f3d4c: {Name:mk29d4e2061830b1c1b84d575042ae4e1f4241e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:10:27.084855  755599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.174f3d4c ...
	I0729 20:10:27.084881  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.174f3d4c: {Name:mkc6e883c708deef6aeae601dff0685e5bf5a37e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:10:27.084986  755599 certs.go:381] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.174f3d4c -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt
	I0729 20:10:27.085110  755599 certs.go:385] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.174f3d4c -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key
	I0729 20:10:27.085235  755599 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key
	I0729 20:10:27.085252  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 20:10:27.085265  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 20:10:27.085275  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 20:10:27.085284  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 20:10:27.085293  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 20:10:27.085303  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 20:10:27.085317  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 20:10:27.085329  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 20:10:27.085380  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem (1338 bytes)
	W0729 20:10:27.085408  755599 certs.go:480] ignoring /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962_empty.pem, impossibly tiny 0 bytes
	I0729 20:10:27.085418  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 20:10:27.085437  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 20:10:27.085461  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 20:10:27.085482  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 20:10:27.085519  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:10:27.085550  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:10:27.085564  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem -> /usr/share/ca-certificates/740962.pem
	I0729 20:10:27.085574  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /usr/share/ca-certificates/7409622.pem
	I0729 20:10:27.085607  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:10:27.088743  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:27.089194  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:10:27.089221  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:27.089373  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:10:27.089637  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:10:27.089875  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:10:27.090016  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:10:27.160383  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 20:10:27.165482  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 20:10:27.177116  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 20:10:27.180867  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 20:10:27.190955  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 20:10:27.195465  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 20:10:27.207980  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 20:10:27.212601  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 20:10:27.223762  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 20:10:27.227649  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 20:10:27.239175  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 20:10:27.243062  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
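	Each stat call above lost its format verb to the logger; since the numbers reported in the "scp ... --> memory" lines are byte counts, the verb is presumably %s (file size), used to size the copy of each shared control-plane secret off the primary node before it is pushed to m02 below:
	# presumed probes run on the existing control-plane node before the copy
	stat -c %s /var/lib/minikube/certs/sa.pub        # 451 bytes per the log
	stat -c %s /var/lib/minikube/certs/etcd/ca.crt   # 1094 bytes per the log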
	I0729 20:10:27.254138  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 20:10:27.277250  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 20:10:27.298527  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 20:10:27.320138  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 20:10:27.341900  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 20:10:27.363590  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 20:10:27.384311  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 20:10:27.406025  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 20:10:27.427706  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 20:10:27.449641  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem --> /usr/share/ca-certificates/740962.pem (1338 bytes)
	I0729 20:10:27.470422  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /usr/share/ca-certificates/7409622.pem (1708 bytes)
	I0729 20:10:27.491984  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 20:10:27.507715  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 20:10:27.522149  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 20:10:27.536851  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 20:10:27.551846  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 20:10:27.566320  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 20:10:27.581928  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 20:10:27.596720  755599 ssh_runner.go:195] Run: openssl version
	I0729 20:10:27.601908  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 20:10:27.611390  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:10:27.615117  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:10:27.615172  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:10:27.620397  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 20:10:27.629882  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/740962.pem && ln -fs /usr/share/ca-certificates/740962.pem /etc/ssl/certs/740962.pem"
	I0729 20:10:27.639528  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/740962.pem
	I0729 20:10:27.643992  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 20:10:27.644044  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/740962.pem
	I0729 20:10:27.649601  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/740962.pem /etc/ssl/certs/51391683.0"
	I0729 20:10:27.659697  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7409622.pem && ln -fs /usr/share/ca-certificates/7409622.pem /etc/ssl/certs/7409622.pem"
	I0729 20:10:27.670737  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7409622.pem
	I0729 20:10:27.674609  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 20:10:27.674661  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7409622.pem
	I0729 20:10:27.679622  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7409622.pem /etc/ssl/certs/3ec20f2e.0"
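	The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject-hash form that certificate lookup in /etc/ssl/certs expects, produced by the same "openssl x509 -hash" invocations shown in the log:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints the subject hash (b5213941 per the symlink above);
	# the .0 suffix just disambiguates hash collisions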
	I0729 20:10:27.689228  755599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:10:27.692969  755599 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 20:10:27.693030  755599 kubeadm.go:934] updating node {m02 192.168.39.104 8443 v1.30.3 crio true true} ...
	I0729 20:10:27.693142  755599 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344518-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 20:10:27.693185  755599 kube-vip.go:115] generating kube-vip config ...
	I0729 20:10:27.693228  755599 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 20:10:27.709929  755599 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 20:10:27.710066  755599 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
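
The block above is the static Pod manifest minikube generates for kube-vip v0.8.0: it advertises the control-plane virtual IP 192.168.39.254 via ARP on eth0, runs leader election on the plndr-cp-lock lease, and load-balances apiserver port 8443; a few lines later the log shows it being copied to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs it as a static pod. The minimal Go sketch below illustrates the kind of /healthz probe the log later issues against the apiserver, aimed here at the VIP; the hard-coded address and the insecure TLS setting are assumptions made only to keep the example self-contained, not something the test itself does.

// healthz_vip.go - illustrative only; probes the kube-vip virtual IP the same
// way the log later checks https://<node-ip>:8443/healthz.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: certificate verification is skipped because this
		// standalone probe does not load the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz via VIP: %s %s\n", resp.Status, body)
}
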
	I0729 20:10:27.710136  755599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 20:10:27.719623  755599 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 20:10:27.719672  755599 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 20:10:27.728587  755599 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 20:10:27.728612  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 20:10:27.728725  755599 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 20:10:27.728731  755599 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 20:10:27.728748  755599 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 20:10:27.732694  755599 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 20:10:27.732720  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 20:10:51.375168  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:10:51.390438  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 20:10:51.390532  755599 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 20:10:51.394517  755599 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 20:10:51.394562  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 20:10:55.678731  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 20:10:55.678817  755599 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 20:10:55.683573  755599 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 20:10:55.683612  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 20:10:55.894374  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 20:10:55.903261  755599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 20:10:55.918748  755599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 20:10:55.933816  755599 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 20:10:55.949545  755599 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 20:10:55.953144  755599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 20:10:55.964404  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:10:56.104009  755599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:10:56.119957  755599 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:10:56.120359  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:56.120412  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:56.136417  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35451
	I0729 20:10:56.137139  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:56.137667  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:56.137697  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:56.138069  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:56.138287  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:10:56.138491  755599 start.go:317] joinCluster: &{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:10:56.138598  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 20:10:56.138616  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:10:56.141591  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:56.142018  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:10:56.142052  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:56.142160  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:10:56.142327  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:10:56.142468  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:10:56.142598  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:10:56.290973  755599 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:10:56.291035  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e7gstn.n8706rcnpqrltanw --discovery-token-ca-cert-hash sha256:6ca3a9d55ee61a543466ff10da1967c1b50ddc5ed0f369803448ea7dd15a35e4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344518-m02 --control-plane --apiserver-advertise-address=192.168.39.104 --apiserver-bind-port=8443"
	I0729 20:11:17.128944  755599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e7gstn.n8706rcnpqrltanw --discovery-token-ca-cert-hash sha256:6ca3a9d55ee61a543466ff10da1967c1b50ddc5ed0f369803448ea7dd15a35e4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344518-m02 --control-plane --apiserver-advertise-address=192.168.39.104 --apiserver-bind-port=8443": (20.837864117s)
	I0729 20:11:17.128995  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 20:11:17.551849  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-344518-m02 minikube.k8s.io/updated_at=2024_07_29T20_11_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a minikube.k8s.io/name=ha-344518 minikube.k8s.io/primary=false
	I0729 20:11:17.679581  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-344518-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 20:11:17.805324  755599 start.go:319] duration metric: took 21.666815728s to joinCluster
	I0729 20:11:17.805405  755599 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:11:17.805786  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:11:17.806806  755599 out.go:177] * Verifying Kubernetes components...
	I0729 20:11:17.808089  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:11:18.095735  755599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:11:18.134901  755599 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:11:18.135217  755599 kapi.go:59] client config for ha-344518: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.crt", KeyFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key", CAFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 20:11:18.135285  755599 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.238:8443
	I0729 20:11:18.135526  755599 node_ready.go:35] waiting up to 6m0s for node "ha-344518-m02" to be "Ready" ...
	I0729 20:11:18.135670  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:18.135679  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:18.135687  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:18.135691  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:18.147378  755599 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0729 20:11:18.636249  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:18.636273  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:18.636285  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:18.636291  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:18.646266  755599 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0729 20:11:19.136128  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:19.136150  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:19.136159  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:19.136164  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:19.147756  755599 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0729 20:11:19.635948  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:19.635972  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:19.635981  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:19.635984  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:19.640942  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:20.136731  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:20.136761  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:20.136772  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:20.136777  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:20.140265  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:20.140809  755599 node_ready.go:53] node "ha-344518-m02" has status "Ready":"False"
	I0729 20:11:20.636148  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:20.636180  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:20.636193  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:20.636206  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:20.639303  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:21.136214  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:21.136240  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:21.136251  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:21.136256  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:21.140463  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:21.636460  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:21.636487  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:21.636498  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:21.636507  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:21.641290  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:22.136131  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:22.136161  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:22.136173  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:22.136177  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:22.140134  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:22.141047  755599 node_ready.go:53] node "ha-344518-m02" has status "Ready":"False"
	I0729 20:11:22.636528  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:22.636555  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:22.636564  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:22.636569  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:22.639898  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:23.135732  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:23.135756  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:23.135765  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:23.135768  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:23.140090  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:23.636392  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:23.636415  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:23.636424  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:23.636429  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:23.640238  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:24.136187  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:24.136217  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:24.136230  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:24.136236  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:24.140483  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:24.141765  755599 node_ready.go:53] node "ha-344518-m02" has status "Ready":"False"
	I0729 20:11:24.636096  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:24.636123  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:24.636139  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:24.636143  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:24.639749  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:25.136794  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:25.136821  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:25.136835  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:25.136840  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:25.139992  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:25.636085  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:25.636114  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:25.636124  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:25.636129  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:25.639968  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:26.136183  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:26.136205  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:26.136214  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:26.136219  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:26.140418  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:26.636014  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:26.636059  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:26.636072  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:26.636077  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:26.638981  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:11:26.639565  755599 node_ready.go:53] node "ha-344518-m02" has status "Ready":"False"
	I0729 20:11:27.136721  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:27.136746  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:27.136755  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:27.136758  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:27.139799  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:27.636688  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:27.636713  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:27.636724  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:27.636729  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:27.640442  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:28.136515  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:28.136539  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:28.136549  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:28.136554  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:28.139904  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:28.635870  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:28.635896  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:28.635911  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:28.635916  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:28.639967  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:28.640906  755599 node_ready.go:53] node "ha-344518-m02" has status "Ready":"False"
	I0729 20:11:29.136398  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:29.136425  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:29.136438  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:29.136445  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:29.139879  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:29.635797  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:29.635823  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:29.635832  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:29.635835  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:29.639077  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:30.135917  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:30.135940  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:30.135949  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:30.135954  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:30.139150  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:30.636125  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:30.636149  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:30.636157  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:30.636167  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:30.640183  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:31.136355  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:31.136383  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:31.136393  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:31.136398  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:31.139422  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:31.140057  755599 node_ready.go:53] node "ha-344518-m02" has status "Ready":"False"
	I0729 20:11:31.636221  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:31.636248  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:31.636259  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:31.636264  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:31.639513  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:32.136239  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:32.136269  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:32.136282  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:32.136287  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:32.139706  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:32.636649  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:32.636675  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:32.636684  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:32.636688  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:32.639427  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:11:33.135916  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:33.135952  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:33.135974  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:33.135981  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:33.139013  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:33.635978  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:33.636004  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:33.636012  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:33.636015  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:33.639302  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:33.639813  755599 node_ready.go:53] node "ha-344518-m02" has status "Ready":"False"
	I0729 20:11:34.136146  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:34.136170  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:34.136178  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:34.136181  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:34.139054  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:11:34.635839  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:34.635862  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:34.635872  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:34.635876  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:34.638657  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:11:35.136639  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:35.136661  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.136670  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.136675  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.139916  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:35.636783  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:35.636809  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.636817  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.636822  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.641351  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:35.641979  755599 node_ready.go:49] node "ha-344518-m02" has status "Ready":"True"
	I0729 20:11:35.642004  755599 node_ready.go:38] duration metric: took 17.506442147s for node "ha-344518-m02" to be "Ready" ...
	I0729 20:11:35.642021  755599 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 20:11:35.642130  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:11:35.642142  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.642152  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.642159  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.648575  755599 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 20:11:35.654259  755599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wzmc5" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.654343  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wzmc5
	I0729 20:11:35.654352  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.654359  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.654363  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.657016  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:11:35.657591  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:35.657608  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.657617  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.657623  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.663743  755599 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 20:11:35.664176  755599 pod_ready.go:92] pod "coredns-7db6d8ff4d-wzmc5" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:35.664194  755599 pod_ready.go:81] duration metric: took 9.912821ms for pod "coredns-7db6d8ff4d-wzmc5" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.664203  755599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xpkp6" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.664254  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xpkp6
	I0729 20:11:35.664261  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.664268  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.664276  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.666649  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:11:35.667297  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:35.667317  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.667324  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.667328  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.674714  755599 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 20:11:35.675140  755599 pod_ready.go:92] pod "coredns-7db6d8ff4d-xpkp6" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:35.675166  755599 pod_ready.go:81] duration metric: took 10.95765ms for pod "coredns-7db6d8ff4d-xpkp6" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.675175  755599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.675222  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344518
	I0729 20:11:35.675229  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.675235  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.675241  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.681307  755599 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 20:11:35.681915  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:35.681928  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.681936  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.681940  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.689894  755599 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 20:11:35.690323  755599 pod_ready.go:92] pod "etcd-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:35.690343  755599 pod_ready.go:81] duration metric: took 15.162322ms for pod "etcd-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.690353  755599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.690412  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344518-m02
	I0729 20:11:35.690424  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.690432  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.690436  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.695233  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:35.695795  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:35.695808  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.695815  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.695819  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.700061  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:35.700537  755599 pod_ready.go:92] pod "etcd-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:35.700553  755599 pod_ready.go:81] duration metric: took 10.194192ms for pod "etcd-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.700572  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.836896  755599 request.go:629] Waited for 136.251612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518
	I0729 20:11:35.836997  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518
	I0729 20:11:35.837004  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.837014  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.837021  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.840842  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:36.036850  755599 request.go:629] Waited for 195.30679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:36.036925  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:36.036931  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:36.036939  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:36.036943  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:36.040840  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:36.041535  755599 pod_ready.go:92] pod "kube-apiserver-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:36.041555  755599 pod_ready.go:81] duration metric: took 340.975746ms for pod "kube-apiserver-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:36.041564  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:36.237543  755599 request.go:629] Waited for 195.904869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518-m02
	I0729 20:11:36.237615  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518-m02
	I0729 20:11:36.237620  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:36.237628  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:36.237631  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:36.242184  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:36.437369  755599 request.go:629] Waited for 194.358026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:36.437444  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:36.437453  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:36.437465  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:36.437474  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:36.440851  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:36.441387  755599 pod_ready.go:92] pod "kube-apiserver-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:36.441409  755599 pod_ready.go:81] duration metric: took 399.837907ms for pod "kube-apiserver-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:36.441419  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:36.637439  755599 request.go:629] Waited for 195.923012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518
	I0729 20:11:36.637526  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518
	I0729 20:11:36.637533  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:36.637541  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:36.637546  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:36.641074  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:36.837218  755599 request.go:629] Waited for 195.381667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:36.837280  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:36.837285  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:36.837292  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:36.837297  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:36.840676  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:36.841190  755599 pod_ready.go:92] pod "kube-controller-manager-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:36.841209  755599 pod_ready.go:81] duration metric: took 399.783358ms for pod "kube-controller-manager-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:36.841218  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:37.037339  755599 request.go:629] Waited for 196.004131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518-m02
	I0729 20:11:37.037424  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518-m02
	I0729 20:11:37.037433  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:37.037444  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:37.037451  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:37.040956  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:37.236893  755599 request.go:629] Waited for 195.334849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:37.236976  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:37.236981  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:37.236990  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:37.236994  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:37.240332  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:37.240828  755599 pod_ready.go:92] pod "kube-controller-manager-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:37.240850  755599 pod_ready.go:81] duration metric: took 399.625522ms for pod "kube-controller-manager-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:37.240860  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fh6rg" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:37.437122  755599 request.go:629] Waited for 196.165968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fh6rg
	I0729 20:11:37.437190  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fh6rg
	I0729 20:11:37.437196  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:37.437204  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:37.437209  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:37.440918  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:37.636903  755599 request.go:629] Waited for 195.291062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:37.636969  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:37.636975  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:37.636983  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:37.636987  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:37.640607  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:37.641309  755599 pod_ready.go:92] pod "kube-proxy-fh6rg" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:37.641340  755599 pod_ready.go:81] duration metric: took 400.472066ms for pod "kube-proxy-fh6rg" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:37.641354  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nfxp2" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:37.837231  755599 request.go:629] Waited for 195.789027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nfxp2
	I0729 20:11:37.837305  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nfxp2
	I0729 20:11:37.837310  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:37.837319  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:37.837330  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:37.841791  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:38.037794  755599 request.go:629] Waited for 195.32069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:38.037877  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:38.037884  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:38.037897  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:38.037908  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:38.040965  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:38.041467  755599 pod_ready.go:92] pod "kube-proxy-nfxp2" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:38.041490  755599 pod_ready.go:81] duration metric: took 400.124155ms for pod "kube-proxy-nfxp2" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:38.041501  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:38.237576  755599 request.go:629] Waited for 195.990661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518
	I0729 20:11:38.237667  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518
	I0729 20:11:38.237674  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:38.237684  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:38.237692  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:38.241059  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:38.436895  755599 request.go:629] Waited for 195.307559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:38.436965  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:38.436971  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:38.436979  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:38.436983  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:38.440744  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:38.441468  755599 pod_ready.go:92] pod "kube-scheduler-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:38.441489  755599 pod_ready.go:81] duration metric: took 399.982414ms for pod "kube-scheduler-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:38.441500  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:38.637663  755599 request.go:629] Waited for 196.070509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518-m02
	I0729 20:11:38.637738  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518-m02
	I0729 20:11:38.637743  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:38.637751  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:38.637757  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:38.641143  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:38.837180  755599 request.go:629] Waited for 195.409472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:38.837241  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:38.837246  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:38.837254  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:38.837260  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:38.840552  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:38.841040  755599 pod_ready.go:92] pod "kube-scheduler-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:38.841059  755599 pod_ready.go:81] duration metric: took 399.552687ms for pod "kube-scheduler-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:38.841071  755599 pod_ready.go:38] duration metric: took 3.199004886s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
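
The wait loop above checks the Ready condition on each system-critical pod before the node is treated as healthy. For readers who want to reproduce that check by hand, here is a minimal client-go sketch; it reads the kubeconfig from $KUBECONFIG and hard-codes one pod name taken from the log, so it is an illustration of the idea, not minikube's actual pod_ready.go code.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a kubeconfig path in $KUBECONFIG; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Pod name from the log above; poll until Ready or the 6m budget runs out.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-ha-344518-m02", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
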
	I0729 20:11:38.841087  755599 api_server.go:52] waiting for apiserver process to appear ...
	I0729 20:11:38.841138  755599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:11:38.857306  755599 api_server.go:72] duration metric: took 21.051860743s to wait for apiserver process to appear ...
	I0729 20:11:38.857336  755599 api_server.go:88] waiting for apiserver healthz status ...
	I0729 20:11:38.857353  755599 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0729 20:11:38.861608  755599 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0729 20:11:38.861691  755599 round_trippers.go:463] GET https://192.168.39.238:8443/version
	I0729 20:11:38.861696  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:38.861707  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:38.861713  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:38.862688  755599 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 20:11:38.862770  755599 api_server.go:141] control plane version: v1.30.3
	I0729 20:11:38.862786  755599 api_server.go:131] duration metric: took 5.444906ms to wait for apiserver health ...
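
The healthz probe above is a plain HTTPS GET against the control-plane endpoint. A rough equivalent is sketched below; it skips TLS verification for brevity, whereas minikube authenticates with the cluster's own certificates, so treat it only as an illustration.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Illustrative only: skip certificate verification instead of loading
	// the cluster CA and client certificates that minikube actually uses.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.238:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%q\n", resp.StatusCode, body) // expect 200 and "ok" as in the log
}
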
	I0729 20:11:38.862794  755599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 20:11:39.037210  755599 request.go:629] Waited for 174.346456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:11:39.037286  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:11:39.037294  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:39.037303  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:39.037317  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:39.042538  755599 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 20:11:39.046435  755599 system_pods.go:59] 17 kube-system pods found
	I0729 20:11:39.046462  755599 system_pods.go:61] "coredns-7db6d8ff4d-wzmc5" [2badd33a-9085-4e72-9934-f31c6142556e] Running
	I0729 20:11:39.046467  755599 system_pods.go:61] "coredns-7db6d8ff4d-xpkp6" [89bb48a7-72c4-4f23-aad8-530fc74e76e0] Running
	I0729 20:11:39.046471  755599 system_pods.go:61] "etcd-ha-344518" [2d9e6a92-a45e-41fc-9e29-e59128b7b830] Running
	I0729 20:11:39.046474  755599 system_pods.go:61] "etcd-ha-344518-m02" [6c6a4ddc-69fb-45bd-abbb-e51acb5da561] Running
	I0729 20:11:39.046477  755599 system_pods.go:61] "kindnet-jj2b4" [b53c635e-8077-466a-a171-23e84c33bd25] Running
	I0729 20:11:39.046480  755599 system_pods.go:61] "kindnet-nl4kz" [39441191-433d-4abc-b0c8-d4114713f68a] Running
	I0729 20:11:39.046482  755599 system_pods.go:61] "kube-apiserver-ha-344518" [aadbbdf5-6f91-4232-8c08-fc2f91cf35e5] Running
	I0729 20:11:39.046485  755599 system_pods.go:61] "kube-apiserver-ha-344518-m02" [2bc89a1d-0681-451a-bb47-0d82fbeb6a0f] Running
	I0729 20:11:39.046490  755599 system_pods.go:61] "kube-controller-manager-ha-344518" [3c1f20e1-80d6-4bef-a115-d4e62d3d938e] Running
	I0729 20:11:39.046495  755599 system_pods.go:61] "kube-controller-manager-ha-344518-m02" [31b506c1-6be7-4e9a-a96e-b2ac161edcab] Running
	I0729 20:11:39.046499  755599 system_pods.go:61] "kube-proxy-fh6rg" [275f3f36-39e1-461a-9c4d-4b2d8773d325] Running
	I0729 20:11:39.046503  755599 system_pods.go:61] "kube-proxy-nfxp2" [827466b6-aa03-4707-8594-b5eaaa864ebe] Running
	I0729 20:11:39.046508  755599 system_pods.go:61] "kube-scheduler-ha-344518" [e8ae3853-ac48-46fa-88b6-31b4c0f2c527] Running
	I0729 20:11:39.046515  755599 system_pods.go:61] "kube-scheduler-ha-344518-m02" [bd8f41d2-f637-4c19-8b66-7ffc1513d895] Running
	I0729 20:11:39.046519  755599 system_pods.go:61] "kube-vip-ha-344518" [140d2a2f-c461-421e-9b01-a5e6d7f2b9f8] Running
	I0729 20:11:39.046527  755599 system_pods.go:61] "kube-vip-ha-344518-m02" [6024c813-df16-43b4-83cc-e978ceb00d51] Running
	I0729 20:11:39.046531  755599 system_pods.go:61] "storage-provisioner" [9e8bd9d2-8adf-47de-8e32-05d64002a631] Running
	I0729 20:11:39.046541  755599 system_pods.go:74] duration metric: took 183.73745ms to wait for pod list to return data ...
	I0729 20:11:39.046552  755599 default_sa.go:34] waiting for default service account to be created ...
	I0729 20:11:39.236913  755599 request.go:629] Waited for 190.266141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0729 20:11:39.236988  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0729 20:11:39.236993  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:39.237000  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:39.237004  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:39.240352  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:39.240640  755599 default_sa.go:45] found service account: "default"
	I0729 20:11:39.240662  755599 default_sa.go:55] duration metric: took 194.099747ms for default service account to be created ...
	I0729 20:11:39.240676  755599 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 20:11:39.436967  755599 request.go:629] Waited for 196.206967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:11:39.437065  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:11:39.437073  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:39.437087  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:39.437093  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:39.442716  755599 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 20:11:39.448741  755599 system_pods.go:86] 17 kube-system pods found
	I0729 20:11:39.448770  755599 system_pods.go:89] "coredns-7db6d8ff4d-wzmc5" [2badd33a-9085-4e72-9934-f31c6142556e] Running
	I0729 20:11:39.448776  755599 system_pods.go:89] "coredns-7db6d8ff4d-xpkp6" [89bb48a7-72c4-4f23-aad8-530fc74e76e0] Running
	I0729 20:11:39.448780  755599 system_pods.go:89] "etcd-ha-344518" [2d9e6a92-a45e-41fc-9e29-e59128b7b830] Running
	I0729 20:11:39.448784  755599 system_pods.go:89] "etcd-ha-344518-m02" [6c6a4ddc-69fb-45bd-abbb-e51acb5da561] Running
	I0729 20:11:39.448787  755599 system_pods.go:89] "kindnet-jj2b4" [b53c635e-8077-466a-a171-23e84c33bd25] Running
	I0729 20:11:39.448791  755599 system_pods.go:89] "kindnet-nl4kz" [39441191-433d-4abc-b0c8-d4114713f68a] Running
	I0729 20:11:39.448795  755599 system_pods.go:89] "kube-apiserver-ha-344518" [aadbbdf5-6f91-4232-8c08-fc2f91cf35e5] Running
	I0729 20:11:39.448799  755599 system_pods.go:89] "kube-apiserver-ha-344518-m02" [2bc89a1d-0681-451a-bb47-0d82fbeb6a0f] Running
	I0729 20:11:39.448803  755599 system_pods.go:89] "kube-controller-manager-ha-344518" [3c1f20e1-80d6-4bef-a115-d4e62d3d938e] Running
	I0729 20:11:39.448807  755599 system_pods.go:89] "kube-controller-manager-ha-344518-m02" [31b506c1-6be7-4e9a-a96e-b2ac161edcab] Running
	I0729 20:11:39.448811  755599 system_pods.go:89] "kube-proxy-fh6rg" [275f3f36-39e1-461a-9c4d-4b2d8773d325] Running
	I0729 20:11:39.448814  755599 system_pods.go:89] "kube-proxy-nfxp2" [827466b6-aa03-4707-8594-b5eaaa864ebe] Running
	I0729 20:11:39.448818  755599 system_pods.go:89] "kube-scheduler-ha-344518" [e8ae3853-ac48-46fa-88b6-31b4c0f2c527] Running
	I0729 20:11:39.448824  755599 system_pods.go:89] "kube-scheduler-ha-344518-m02" [bd8f41d2-f637-4c19-8b66-7ffc1513d895] Running
	I0729 20:11:39.448829  755599 system_pods.go:89] "kube-vip-ha-344518" [140d2a2f-c461-421e-9b01-a5e6d7f2b9f8] Running
	I0729 20:11:39.448832  755599 system_pods.go:89] "kube-vip-ha-344518-m02" [6024c813-df16-43b4-83cc-e978ceb00d51] Running
	I0729 20:11:39.448835  755599 system_pods.go:89] "storage-provisioner" [9e8bd9d2-8adf-47de-8e32-05d64002a631] Running
	I0729 20:11:39.448846  755599 system_pods.go:126] duration metric: took 208.165158ms to wait for k8s-apps to be running ...
	I0729 20:11:39.448856  755599 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 20:11:39.448902  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:11:39.463783  755599 system_svc.go:56] duration metric: took 14.917659ms WaitForService to wait for kubelet
	I0729 20:11:39.463816  755599 kubeadm.go:582] duration metric: took 21.658372656s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 20:11:39.463843  755599 node_conditions.go:102] verifying NodePressure condition ...
	I0729 20:11:39.637314  755599 request.go:629] Waited for 173.376861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes
	I0729 20:11:39.637401  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes
	I0729 20:11:39.637409  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:39.637424  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:39.637429  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:39.641524  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:39.642312  755599 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 20:11:39.642367  755599 node_conditions.go:123] node cpu capacity is 2
	I0729 20:11:39.642380  755599 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 20:11:39.642385  755599 node_conditions.go:123] node cpu capacity is 2
	I0729 20:11:39.642390  755599 node_conditions.go:105] duration metric: took 178.541559ms to run NodePressure ...
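
The NodePressure step reads each node's reported capacity (the log shows 17734596Ki of ephemeral storage and 2 CPUs per node). Below is a small client-go sketch of the same lookup, again driven by $KUBECONFIG; it is not the actual node_conditions.go code.

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a map of resource name to quantity on the node status.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
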
	I0729 20:11:39.642409  755599 start.go:241] waiting for startup goroutines ...
	I0729 20:11:39.642436  755599 start.go:255] writing updated cluster config ...
	I0729 20:11:39.644658  755599 out.go:177] 
	I0729 20:11:39.646062  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:11:39.646162  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:11:39.647836  755599 out.go:177] * Starting "ha-344518-m03" control-plane node in "ha-344518" cluster
	I0729 20:11:39.649307  755599 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:11:39.649335  755599 cache.go:56] Caching tarball of preloaded images
	I0729 20:11:39.649443  755599 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 20:11:39.649458  755599 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 20:11:39.649554  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:11:39.649742  755599 start.go:360] acquireMachinesLock for ha-344518-m03: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:11:39.649796  755599 start.go:364] duration metric: took 31.452µs to acquireMachinesLock for "ha-344518-m03"
	I0729 20:11:39.649821  755599 start.go:93] Provisioning new machine with config: &{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:11:39.649951  755599 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 20:11:39.651593  755599 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 20:11:39.651686  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:11:39.651721  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:11:39.669410  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45409
	I0729 20:11:39.669889  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:11:39.670566  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:11:39.670591  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:11:39.671030  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:11:39.671229  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetMachineName
	I0729 20:11:39.671458  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:11:39.671646  755599 start.go:159] libmachine.API.Create for "ha-344518" (driver="kvm2")
	I0729 20:11:39.671680  755599 client.go:168] LocalClient.Create starting
	I0729 20:11:39.671719  755599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem
	I0729 20:11:39.671780  755599 main.go:141] libmachine: Decoding PEM data...
	I0729 20:11:39.671804  755599 main.go:141] libmachine: Parsing certificate...
	I0729 20:11:39.671867  755599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem
	I0729 20:11:39.671898  755599 main.go:141] libmachine: Decoding PEM data...
	I0729 20:11:39.671914  755599 main.go:141] libmachine: Parsing certificate...
	I0729 20:11:39.671948  755599 main.go:141] libmachine: Running pre-create checks...
	I0729 20:11:39.671959  755599 main.go:141] libmachine: (ha-344518-m03) Calling .PreCreateCheck
	I0729 20:11:39.672165  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetConfigRaw
	I0729 20:11:39.672560  755599 main.go:141] libmachine: Creating machine...
	I0729 20:11:39.672575  755599 main.go:141] libmachine: (ha-344518-m03) Calling .Create
	I0729 20:11:39.672744  755599 main.go:141] libmachine: (ha-344518-m03) Creating KVM machine...
	I0729 20:11:39.673982  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found existing default KVM network
	I0729 20:11:39.674123  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found existing private KVM network mk-ha-344518
	I0729 20:11:39.674349  755599 main.go:141] libmachine: (ha-344518-m03) Setting up store path in /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03 ...
	I0729 20:11:39.674385  755599 main.go:141] libmachine: (ha-344518-m03) Building disk image from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 20:11:39.674468  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:39.674363  756503 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:11:39.674586  755599 main.go:141] libmachine: (ha-344518-m03) Downloading /home/jenkins/minikube-integration/19344-733808/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 20:11:39.952405  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:39.952249  756503 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa...
	I0729 20:11:40.015841  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:40.015702  756503 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/ha-344518-m03.rawdisk...
	I0729 20:11:40.015883  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Writing magic tar header
	I0729 20:11:40.015901  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Writing SSH key tar header
	I0729 20:11:40.015914  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:40.015819  756503 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03 ...
	I0729 20:11:40.015980  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03
	I0729 20:11:40.016020  755599 main.go:141] libmachine: (ha-344518-m03) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03 (perms=drwx------)
	I0729 20:11:40.016053  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines
	I0729 20:11:40.016069  755599 main.go:141] libmachine: (ha-344518-m03) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines (perms=drwxr-xr-x)
	I0729 20:11:40.016090  755599 main.go:141] libmachine: (ha-344518-m03) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube (perms=drwxr-xr-x)
	I0729 20:11:40.016102  755599 main.go:141] libmachine: (ha-344518-m03) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808 (perms=drwxrwxr-x)
	I0729 20:11:40.016115  755599 main.go:141] libmachine: (ha-344518-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 20:11:40.016131  755599 main.go:141] libmachine: (ha-344518-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 20:11:40.016144  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:11:40.016155  755599 main.go:141] libmachine: (ha-344518-m03) Creating domain...
	I0729 20:11:40.016175  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808
	I0729 20:11:40.016193  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 20:11:40.016205  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 20:11:40.016215  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Checking permissions on dir: /home
	I0729 20:11:40.016225  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Skipping /home - not owner
	I0729 20:11:40.017000  755599 main.go:141] libmachine: (ha-344518-m03) define libvirt domain using xml: 
	I0729 20:11:40.017019  755599 main.go:141] libmachine: (ha-344518-m03) <domain type='kvm'>
	I0729 20:11:40.017030  755599 main.go:141] libmachine: (ha-344518-m03)   <name>ha-344518-m03</name>
	I0729 20:11:40.017038  755599 main.go:141] libmachine: (ha-344518-m03)   <memory unit='MiB'>2200</memory>
	I0729 20:11:40.017048  755599 main.go:141] libmachine: (ha-344518-m03)   <vcpu>2</vcpu>
	I0729 20:11:40.017064  755599 main.go:141] libmachine: (ha-344518-m03)   <features>
	I0729 20:11:40.017077  755599 main.go:141] libmachine: (ha-344518-m03)     <acpi/>
	I0729 20:11:40.017087  755599 main.go:141] libmachine: (ha-344518-m03)     <apic/>
	I0729 20:11:40.017100  755599 main.go:141] libmachine: (ha-344518-m03)     <pae/>
	I0729 20:11:40.017111  755599 main.go:141] libmachine: (ha-344518-m03)     
	I0729 20:11:40.017122  755599 main.go:141] libmachine: (ha-344518-m03)   </features>
	I0729 20:11:40.017134  755599 main.go:141] libmachine: (ha-344518-m03)   <cpu mode='host-passthrough'>
	I0729 20:11:40.017155  755599 main.go:141] libmachine: (ha-344518-m03)   
	I0729 20:11:40.017172  755599 main.go:141] libmachine: (ha-344518-m03)   </cpu>
	I0729 20:11:40.017202  755599 main.go:141] libmachine: (ha-344518-m03)   <os>
	I0729 20:11:40.017226  755599 main.go:141] libmachine: (ha-344518-m03)     <type>hvm</type>
	I0729 20:11:40.017236  755599 main.go:141] libmachine: (ha-344518-m03)     <boot dev='cdrom'/>
	I0729 20:11:40.017251  755599 main.go:141] libmachine: (ha-344518-m03)     <boot dev='hd'/>
	I0729 20:11:40.017261  755599 main.go:141] libmachine: (ha-344518-m03)     <bootmenu enable='no'/>
	I0729 20:11:40.017271  755599 main.go:141] libmachine: (ha-344518-m03)   </os>
	I0729 20:11:40.017300  755599 main.go:141] libmachine: (ha-344518-m03)   <devices>
	I0729 20:11:40.017312  755599 main.go:141] libmachine: (ha-344518-m03)     <disk type='file' device='cdrom'>
	I0729 20:11:40.017325  755599 main.go:141] libmachine: (ha-344518-m03)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/boot2docker.iso'/>
	I0729 20:11:40.017340  755599 main.go:141] libmachine: (ha-344518-m03)       <target dev='hdc' bus='scsi'/>
	I0729 20:11:40.017351  755599 main.go:141] libmachine: (ha-344518-m03)       <readonly/>
	I0729 20:11:40.017362  755599 main.go:141] libmachine: (ha-344518-m03)     </disk>
	I0729 20:11:40.017373  755599 main.go:141] libmachine: (ha-344518-m03)     <disk type='file' device='disk'>
	I0729 20:11:40.017385  755599 main.go:141] libmachine: (ha-344518-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 20:11:40.017400  755599 main.go:141] libmachine: (ha-344518-m03)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/ha-344518-m03.rawdisk'/>
	I0729 20:11:40.017411  755599 main.go:141] libmachine: (ha-344518-m03)       <target dev='hda' bus='virtio'/>
	I0729 20:11:40.017426  755599 main.go:141] libmachine: (ha-344518-m03)     </disk>
	I0729 20:11:40.017446  755599 main.go:141] libmachine: (ha-344518-m03)     <interface type='network'>
	I0729 20:11:40.017459  755599 main.go:141] libmachine: (ha-344518-m03)       <source network='mk-ha-344518'/>
	I0729 20:11:40.017472  755599 main.go:141] libmachine: (ha-344518-m03)       <model type='virtio'/>
	I0729 20:11:40.017483  755599 main.go:141] libmachine: (ha-344518-m03)     </interface>
	I0729 20:11:40.017496  755599 main.go:141] libmachine: (ha-344518-m03)     <interface type='network'>
	I0729 20:11:40.017509  755599 main.go:141] libmachine: (ha-344518-m03)       <source network='default'/>
	I0729 20:11:40.017526  755599 main.go:141] libmachine: (ha-344518-m03)       <model type='virtio'/>
	I0729 20:11:40.017538  755599 main.go:141] libmachine: (ha-344518-m03)     </interface>
	I0729 20:11:40.017557  755599 main.go:141] libmachine: (ha-344518-m03)     <serial type='pty'>
	I0729 20:11:40.017576  755599 main.go:141] libmachine: (ha-344518-m03)       <target port='0'/>
	I0729 20:11:40.017587  755599 main.go:141] libmachine: (ha-344518-m03)     </serial>
	I0729 20:11:40.017595  755599 main.go:141] libmachine: (ha-344518-m03)     <console type='pty'>
	I0729 20:11:40.017607  755599 main.go:141] libmachine: (ha-344518-m03)       <target type='serial' port='0'/>
	I0729 20:11:40.017619  755599 main.go:141] libmachine: (ha-344518-m03)     </console>
	I0729 20:11:40.017633  755599 main.go:141] libmachine: (ha-344518-m03)     <rng model='virtio'>
	I0729 20:11:40.017647  755599 main.go:141] libmachine: (ha-344518-m03)       <backend model='random'>/dev/random</backend>
	I0729 20:11:40.017656  755599 main.go:141] libmachine: (ha-344518-m03)     </rng>
	I0729 20:11:40.017676  755599 main.go:141] libmachine: (ha-344518-m03)     
	I0729 20:11:40.017693  755599 main.go:141] libmachine: (ha-344518-m03)     
	I0729 20:11:40.017707  755599 main.go:141] libmachine: (ha-344518-m03)   </devices>
	I0729 20:11:40.017715  755599 main.go:141] libmachine: (ha-344518-m03) </domain>
	I0729 20:11:40.017728  755599 main.go:141] libmachine: (ha-344518-m03) 
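
The XML above is the libvirt domain definition the kvm2 driver generates for the m03 guest. The sketch below shows the same idea done by hand, embedding a trimmed copy of that XML and shelling out to virsh; the real driver talks to libvirt programmatically and includes the boot2docker ISO, serial console and RNG device, so treat this as an illustration only.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Trimmed-down version of the domain XML from the log above.
const domainXML = `<domain type='kvm'>
  <name>ha-344518-m03</name>
  <memory unit='MiB'>2200</memory>
  <vcpu>2</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/ha-344518-m03.rawdisk'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='mk-ha-344518'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>`

func main() {
	// Write the XML to a temp file and hand it to virsh: `virsh define`
	// registers the domain, `virsh start` boots it.
	f, err := os.CreateTemp("", "ha-344518-m03-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(domainXML); err != nil {
		panic(err)
	}
	f.Close()

	for _, args := range [][]string{{"define", f.Name()}, {"start", "ha-344518-m03"}} {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		fmt.Printf("virsh %v: %s\n", args, out)
		if err != nil {
			panic(err)
		}
	}
}
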
	I0729 20:11:40.024354  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:c5:c3:3e in network default
	I0729 20:11:40.024921  755599 main.go:141] libmachine: (ha-344518-m03) Ensuring networks are active...
	I0729 20:11:40.024940  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:40.025593  755599 main.go:141] libmachine: (ha-344518-m03) Ensuring network default is active
	I0729 20:11:40.025843  755599 main.go:141] libmachine: (ha-344518-m03) Ensuring network mk-ha-344518 is active
	I0729 20:11:40.026177  755599 main.go:141] libmachine: (ha-344518-m03) Getting domain xml...
	I0729 20:11:40.026814  755599 main.go:141] libmachine: (ha-344518-m03) Creating domain...
	I0729 20:11:41.266986  755599 main.go:141] libmachine: (ha-344518-m03) Waiting to get IP...
	I0729 20:11:41.267910  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:41.268388  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:41.268414  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:41.268321  756503 retry.go:31] will retry after 277.943575ms: waiting for machine to come up
	I0729 20:11:41.547760  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:41.548259  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:41.548291  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:41.548197  756503 retry.go:31] will retry after 314.191405ms: waiting for machine to come up
	I0729 20:11:41.863651  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:41.864119  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:41.864144  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:41.864073  756503 retry.go:31] will retry after 457.969852ms: waiting for machine to come up
	I0729 20:11:42.323737  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:42.324117  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:42.324143  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:42.324075  756503 retry.go:31] will retry after 497.585545ms: waiting for machine to come up
	I0729 20:11:42.823826  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:42.824310  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:42.824350  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:42.824264  756503 retry.go:31] will retry after 721.983704ms: waiting for machine to come up
	I0729 20:11:43.548162  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:43.548608  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:43.548638  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:43.548553  756503 retry.go:31] will retry after 646.831228ms: waiting for machine to come up
	I0729 20:11:44.197556  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:44.198085  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:44.198115  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:44.198015  756503 retry.go:31] will retry after 924.878532ms: waiting for machine to come up
	I0729 20:11:45.124713  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:45.125264  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:45.125305  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:45.125223  756503 retry.go:31] will retry after 1.391829943s: waiting for machine to come up
	I0729 20:11:46.518870  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:46.519370  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:46.519400  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:46.519312  756503 retry.go:31] will retry after 1.668556944s: waiting for machine to come up
	I0729 20:11:48.189217  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:48.189778  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:48.189805  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:48.189728  756503 retry.go:31] will retry after 1.865775967s: waiting for machine to come up
	I0729 20:11:50.057284  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:50.057789  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:50.057808  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:50.057754  756503 retry.go:31] will retry after 2.228840474s: waiting for machine to come up
	I0729 20:11:52.289080  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:52.289596  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:52.289622  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:52.289519  756503 retry.go:31] will retry after 3.476158421s: waiting for machine to come up
	I0729 20:11:55.767656  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:55.768243  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:55.768268  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:55.768197  756503 retry.go:31] will retry after 4.067263279s: waiting for machine to come up
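
The irregular "will retry after ..." intervals above come from a growing, jittered backoff while the driver waits for the guest's MAC address to appear with a DHCP lease on the mk-ha-344518 network. A rough sketch of that pattern, polling `virsh net-dhcp-leases`; the constants and the virsh call are illustrative, not what the driver literally runs.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const mac = "52:54:00:36:90:07" // MAC address from the log above
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(3 * time.Minute)

	for time.Now().Before(deadline) {
		// List DHCP leases on the cluster's private network and look for our MAC.
		out, err := exec.Command("virsh", "net-dhcp-leases", "mk-ha-344518").CombinedOutput()
		if err == nil && strings.Contains(string(out), mac) {
			fmt.Println("machine has an IP lease:")
			fmt.Println(string(out))
			return
		}

		// Grow the delay and add jitter, roughly matching the irregular
		// "will retry after ..." intervals in the log.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("no lease yet, retrying after %v\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	fmt.Println("timed out waiting for the machine to get an IP")
}
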
	I0729 20:11:59.836951  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:59.837480  755599 main.go:141] libmachine: (ha-344518-m03) Found IP for machine: 192.168.39.53
	I0729 20:11:59.837505  755599 main.go:141] libmachine: (ha-344518-m03) Reserving static IP address...
	I0729 20:11:59.837518  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has current primary IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:59.837983  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find host DHCP lease matching {name: "ha-344518-m03", mac: "52:54:00:36:90:07", ip: "192.168.39.53"} in network mk-ha-344518
	I0729 20:11:59.915114  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Getting to WaitForSSH function...
	I0729 20:11:59.915149  755599 main.go:141] libmachine: (ha-344518-m03) Reserved static IP address: 192.168.39.53
	I0729 20:11:59.915185  755599 main.go:141] libmachine: (ha-344518-m03) Waiting for SSH to be available...
	I0729 20:11:59.917944  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:59.918593  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:minikube Clientid:01:52:54:00:36:90:07}
	I0729 20:11:59.918627  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:59.918811  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Using SSH client type: external
	I0729 20:11:59.918842  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa (-rw-------)
	I0729 20:11:59.918875  755599 main.go:141] libmachine: (ha-344518-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 20:11:59.918890  755599 main.go:141] libmachine: (ha-344518-m03) DBG | About to run SSH command:
	I0729 20:11:59.918906  755599 main.go:141] libmachine: (ha-344518-m03) DBG | exit 0
	I0729 20:12:00.044086  755599 main.go:141] libmachine: (ha-344518-m03) DBG | SSH cmd err, output: <nil>: 
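
The WaitForSSH step simply reruns `exit 0` over ssh with the options logged above until the command exits zero. A minimal sketch with a subset of those flags follows; the key path and address are copied from the log, while the retry cadence and timeout are made up for illustration.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"-i", "/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-p", "22",
		"docker@192.168.39.53",
		"exit 0",
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// The guest is reachable once `ssh ... exit 0` returns a zero exit status.
		if err := exec.Command("ssh", args...).Run(); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for SSH")
}
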
	I0729 20:12:00.044430  755599 main.go:141] libmachine: (ha-344518-m03) KVM machine creation complete!
	I0729 20:12:00.044763  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetConfigRaw
	I0729 20:12:00.045479  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:12:00.045692  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:12:00.045866  755599 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 20:12:00.045881  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetState
	I0729 20:12:00.047074  755599 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 20:12:00.047089  755599 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 20:12:00.047099  755599 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 20:12:00.047106  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:00.049675  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.050048  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:00.050074  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.050234  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:00.050423  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.050592  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.050738  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:00.050936  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:12:00.051156  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0729 20:12:00.051168  755599 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 20:12:00.155934  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:12:00.155961  755599 main.go:141] libmachine: Detecting the provisioner...
	I0729 20:12:00.155971  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:00.159018  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.159465  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:00.159494  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.159639  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:00.159887  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.160084  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.160213  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:00.160432  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:12:00.160592  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0729 20:12:00.160602  755599 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 20:12:00.268562  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 20:12:00.268629  755599 main.go:141] libmachine: found compatible host: buildroot
	I0729 20:12:00.268640  755599 main.go:141] libmachine: Provisioning with buildroot...
	I0729 20:12:00.268651  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetMachineName
	I0729 20:12:00.268970  755599 buildroot.go:166] provisioning hostname "ha-344518-m03"
	I0729 20:12:00.269003  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetMachineName
	I0729 20:12:00.269244  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:00.272477  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.272897  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:00.272921  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.273217  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:00.273467  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.273665  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.273856  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:00.274079  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:12:00.274259  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0729 20:12:00.274271  755599 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344518-m03 && echo "ha-344518-m03" | sudo tee /etc/hostname
	I0729 20:12:00.395035  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344518-m03
	
	I0729 20:12:00.395069  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:00.398127  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.398591  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:00.398617  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.398864  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:00.399074  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.399244  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.399446  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:00.399699  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:12:00.399930  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0729 20:12:00.399954  755599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344518-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344518-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344518-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 20:12:00.517438  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:12:00.517476  755599 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 20:12:00.517500  755599 buildroot.go:174] setting up certificates
	I0729 20:12:00.517516  755599 provision.go:84] configureAuth start
	I0729 20:12:00.517529  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetMachineName
	I0729 20:12:00.517880  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:12:00.520617  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.521007  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:00.521038  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.521317  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:00.523530  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.523932  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:00.523960  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.524138  755599 provision.go:143] copyHostCerts
	I0729 20:12:00.524171  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:12:00.524202  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem, removing ...
	I0729 20:12:00.524212  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:12:00.524280  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 20:12:00.524375  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:12:00.524393  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem, removing ...
	I0729 20:12:00.524399  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:12:00.524424  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 20:12:00.524479  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:12:00.524495  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem, removing ...
	I0729 20:12:00.524501  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:12:00.524522  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 20:12:00.524580  755599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.ha-344518-m03 san=[127.0.0.1 192.168.39.53 ha-344518-m03 localhost minikube]
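
configureAuth issues a server certificate whose SANs cover 127.0.0.1, 192.168.39.53, ha-344518-m03, localhost and minikube, signed by the CA kept under .minikube/certs. The sketch below shows how such a certificate can be produced with crypto/x509, generating a throwaway CA instead of loading the one on disk; it is illustrative rather than minikube's provision code, and error handling is abbreviated.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for illustration; minikube reuses the CA under .minikube/certs.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs and org reported in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-344518-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-344518-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.53")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit PEM to stdout; the real server.pem / server-key.pem land under
	// .minikube/machines/ and are later copied to /etc/docker on the guest.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)})
}
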
	I0729 20:12:01.019516  755599 provision.go:177] copyRemoteCerts
	I0729 20:12:01.019584  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 20:12:01.019617  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:01.022183  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.022497  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.022533  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.022753  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:01.022952  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.023130  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:01.023424  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:12:01.106028  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 20:12:01.106116  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 20:12:01.130953  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 20:12:01.131023  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 20:12:01.153630  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 20:12:01.153713  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 20:12:01.176800  755599 provision.go:87] duration metric: took 659.267754ms to configureAuth
	I0729 20:12:01.176831  755599 buildroot.go:189] setting minikube options for container-runtime
	I0729 20:12:01.177108  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:12:01.177212  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:01.180151  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.180649  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.180679  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.180828  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:01.181075  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.181365  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.181529  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:01.181711  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:12:01.181871  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0729 20:12:01.181884  755599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 20:12:01.454007  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 20:12:01.454049  755599 main.go:141] libmachine: Checking connection to Docker...
	I0729 20:12:01.454062  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetURL
	I0729 20:12:01.455471  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Using libvirt version 6000000
	I0729 20:12:01.457700  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.458171  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.458204  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.458384  755599 main.go:141] libmachine: Docker is up and running!
	I0729 20:12:01.458404  755599 main.go:141] libmachine: Reticulating splines...
	I0729 20:12:01.458413  755599 client.go:171] duration metric: took 21.786723495s to LocalClient.Create
	I0729 20:12:01.458439  755599 start.go:167] duration metric: took 21.786794984s to libmachine.API.Create "ha-344518"
	I0729 20:12:01.458449  755599 start.go:293] postStartSetup for "ha-344518-m03" (driver="kvm2")
	I0729 20:12:01.458462  755599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 20:12:01.458491  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:12:01.458745  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 20:12:01.458774  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:01.460765  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.461118  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.461148  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.461270  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:01.461497  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.461665  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:01.461827  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:12:01.548457  755599 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 20:12:01.552563  755599 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 20:12:01.552589  755599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 20:12:01.552668  755599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 20:12:01.552739  755599 filesync.go:149] local asset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> 7409622.pem in /etc/ssl/certs
	I0729 20:12:01.552748  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /etc/ssl/certs/7409622.pem
	I0729 20:12:01.552826  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 20:12:01.561243  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:12:01.584677  755599 start.go:296] duration metric: took 126.208067ms for postStartSetup
	I0729 20:12:01.584759  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetConfigRaw
	I0729 20:12:01.585413  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:12:01.588230  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.588553  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.588582  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.588897  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:12:01.589171  755599 start.go:128] duration metric: took 21.939207595s to createHost
	I0729 20:12:01.589204  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:01.592831  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.593351  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.593378  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.593457  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:01.593662  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.593842  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.593979  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:01.594149  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:12:01.594313  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0729 20:12:01.594325  755599 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 20:12:01.700211  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722283921.676007561
	
	I0729 20:12:01.700232  755599 fix.go:216] guest clock: 1722283921.676007561
	I0729 20:12:01.700239  755599 fix.go:229] Guest: 2024-07-29 20:12:01.676007561 +0000 UTC Remote: 2024-07-29 20:12:01.589189696 +0000 UTC m=+175.394462204 (delta=86.817865ms)
	I0729 20:12:01.700255  755599 fix.go:200] guest clock delta is within tolerance: 86.817865ms
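
The fix.go lines above compare the clock read inside the VM (`date +%s.%N` over SSH, logged as 1722283921.676007561) with the host-side reference time and accept the resulting 86.8ms delta. A minimal Go sketch of that comparison using the two timestamps from the log; the 2-second tolerance is an assumed value for illustration, not necessarily minikube's actual threshold:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Values taken from the fix.go lines above.
        guest := time.Unix(1722283921, 676007561)                      // clock read inside the VM
        host := time.Date(2024, 7, 29, 20, 12, 1, 589189696, time.UTC) // host-side reference time

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }

        const tolerance = 2 * time.Second // assumed tolerance, for illustration only
        fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
    }
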
	I0729 20:12:01.700260  755599 start.go:83] releasing machines lock for "ha-344518-m03", held for 22.050452874s
	I0729 20:12:01.700277  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:12:01.700532  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:12:01.703365  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.703765  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.703796  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.706380  755599 out.go:177] * Found network options:
	I0729 20:12:01.707962  755599 out.go:177]   - NO_PROXY=192.168.39.238,192.168.39.104
	W0729 20:12:01.709275  755599 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 20:12:01.709309  755599 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 20:12:01.709323  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:12:01.709896  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:12:01.710112  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:12:01.710217  755599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 20:12:01.710262  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	W0729 20:12:01.710329  755599 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 20:12:01.710353  755599 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 20:12:01.710423  755599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 20:12:01.710441  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:01.713282  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.713474  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.713724  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.713752  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.713913  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.713917  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:01.713938  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.714114  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:01.714125  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.714319  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:01.714344  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.714499  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:01.714491  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:12:01.714666  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:12:01.944094  755599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 20:12:01.950694  755599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 20:12:01.950769  755599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 20:12:01.967016  755599 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
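
The find/mv invocation above sidelines any pre-existing bridge or podman CNI configs on the guest by appending a .mk_disabled suffix, so that only minikube's own bridge CNI configuration stays active; here it disabled 87-podman-bridge.conflist. A rough Go equivalent of that rename step (a sketch only; minikube performs it remotely via the shell command shown, not in-process):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        // Mirror: find /etc/cni/net.d -maxdepth 1 -type f \
        //   ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) ...
        entries, err := os.ReadDir("/etc/cni/net.d")
        if err != nil {
            panic(err)
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join("/etc/cni/net.d", name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    panic(err)
                }
                fmt.Println("disabled", src)
            }
        }
    }
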
	I0729 20:12:01.967044  755599 start.go:495] detecting cgroup driver to use...
	I0729 20:12:01.967110  755599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 20:12:01.982528  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 20:12:01.995708  755599 docker.go:216] disabling cri-docker service (if available) ...
	I0729 20:12:01.995780  755599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 20:12:02.009084  755599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 20:12:02.023369  755599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 20:12:02.128484  755599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 20:12:02.283662  755599 docker.go:232] disabling docker service ...
	I0729 20:12:02.283750  755599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 20:12:02.297503  755599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 20:12:02.309551  755599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 20:12:02.426139  755599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 20:12:02.556583  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 20:12:02.570797  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 20:12:02.589222  755599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 20:12:02.589290  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:12:02.599755  755599 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 20:12:02.599838  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:12:02.610345  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:12:02.620910  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:12:02.631487  755599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 20:12:02.642693  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:12:02.653556  755599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:12:02.669084  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:12:02.679725  755599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 20:12:02.688942  755599 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 20:12:02.689008  755599 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 20:12:02.701106  755599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 20:12:02.710079  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:12:02.830153  755599 ssh_runner.go:195] Run: sudo systemctl restart crio
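
The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: it pins the pause image to registry.k8s.io/pause:3.9, switches cgroup_manager to "cgroupfs", forces conmon_cgroup to "pod", and adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. A Go sketch of the first two substitutions expressed as regexp replacements (an illustration of the same edits; the starting values below are made up for the example, and the real provisioning runs the sed commands shown over SSH):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Example starting values, for illustration only; the real file is
        // whatever the guest image ships in /etc/crio/crio.conf.d/02-crio.conf.
        conf := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"

        // sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

        // sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

        fmt.Print(conf)
    }
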
	I0729 20:12:02.953671  755599 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 20:12:02.953750  755599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 20:12:02.958085  755599 start.go:563] Will wait 60s for crictl version
	I0729 20:12:02.958158  755599 ssh_runner.go:195] Run: which crictl
	I0729 20:12:02.961886  755599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 20:12:02.998893  755599 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 20:12:02.998990  755599 ssh_runner.go:195] Run: crio --version
	I0729 20:12:03.026129  755599 ssh_runner.go:195] Run: crio --version
	I0729 20:12:03.055276  755599 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 20:12:03.056720  755599 out.go:177]   - env NO_PROXY=192.168.39.238
	I0729 20:12:03.057990  755599 out.go:177]   - env NO_PROXY=192.168.39.238,192.168.39.104
	I0729 20:12:03.059160  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:12:03.062225  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:03.062566  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:03.062598  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:03.062814  755599 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 20:12:03.066779  755599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 20:12:03.078768  755599 mustload.go:65] Loading cluster: ha-344518
	I0729 20:12:03.079042  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:12:03.079301  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:12:03.079345  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:12:03.094938  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39201
	I0729 20:12:03.095433  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:12:03.095903  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:12:03.095925  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:12:03.096273  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:12:03.096497  755599 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:12:03.098337  755599 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:12:03.098699  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:12:03.098748  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:12:03.114982  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0729 20:12:03.115491  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:12:03.115971  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:12:03.115994  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:12:03.116337  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:12:03.116537  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:12:03.116690  755599 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518 for IP: 192.168.39.53
	I0729 20:12:03.116702  755599 certs.go:194] generating shared ca certs ...
	I0729 20:12:03.116721  755599 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:12:03.116856  755599 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 20:12:03.116897  755599 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 20:12:03.116906  755599 certs.go:256] generating profile certs ...
	I0729 20:12:03.116979  755599 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key
	I0729 20:12:03.117008  755599 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.cdf4bc35
	I0729 20:12:03.117030  755599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.cdf4bc35 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.104 192.168.39.53 192.168.39.254]
	I0729 20:12:03.311360  755599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.cdf4bc35 ...
	I0729 20:12:03.311397  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.cdf4bc35: {Name:mk1a78a099fd3736182aaf0edfadec7a0e984458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:12:03.311617  755599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.cdf4bc35 ...
	I0729 20:12:03.311644  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.cdf4bc35: {Name:mk5b422f05c9b8fee6cce59eb83e918019dbaa81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:12:03.311767  755599 certs.go:381] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.cdf4bc35 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt
	I0729 20:12:03.311904  755599 certs.go:385] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.cdf4bc35 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key
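
The apiserver certificate generated above has to be valid for every address a client might use to reach the API server, which is why crypto.go lists the service ClusterIP, localhost, all three control-plane node IPs, and the HA VIP 192.168.39.254 as SANs. A small illustrative sketch of that SAN list as a crypto/x509 template (minikube builds and signs the real certificate with its own helpers; this only shows the IP list from the log):

    package main

    import (
        "crypto/x509"
        "fmt"
        "net"
    )

    func main() {
        // IPs taken from the crypto.go line above.
        sans := []string{
            "10.96.0.1", "127.0.0.1", "10.0.0.1",
            "192.168.39.238", "192.168.39.104", "192.168.39.53", "192.168.39.254",
        }
        tmpl := &x509.Certificate{}
        for _, s := range sans {
            tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(s))
        }
        fmt.Println("apiserver cert SANs:", tmpl.IPAddresses)
    }
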
	I0729 20:12:03.312054  755599 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key
	I0729 20:12:03.312075  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 20:12:03.312094  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 20:12:03.312110  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 20:12:03.312122  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 20:12:03.312135  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 20:12:03.312147  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 20:12:03.312160  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 20:12:03.312173  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 20:12:03.312231  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem (1338 bytes)
	W0729 20:12:03.312263  755599 certs.go:480] ignoring /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962_empty.pem, impossibly tiny 0 bytes
	I0729 20:12:03.312272  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 20:12:03.312303  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 20:12:03.312326  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 20:12:03.312348  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 20:12:03.312387  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:12:03.312410  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /usr/share/ca-certificates/7409622.pem
	I0729 20:12:03.312422  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:12:03.312438  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem -> /usr/share/ca-certificates/740962.pem
	I0729 20:12:03.312474  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:12:03.316389  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:12:03.316879  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:12:03.316911  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:12:03.317134  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:12:03.317372  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:12:03.317550  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:12:03.317668  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:12:03.392377  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 20:12:03.398150  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 20:12:03.409180  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 20:12:03.413351  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 20:12:03.424798  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 20:12:03.429138  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 20:12:03.442321  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 20:12:03.446918  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 20:12:03.458005  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 20:12:03.462607  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 20:12:03.472735  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 20:12:03.477376  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 20:12:03.488496  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 20:12:03.513287  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 20:12:03.536285  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 20:12:03.558939  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 20:12:03.583477  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 20:12:03.606716  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 20:12:03.628873  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 20:12:03.652531  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 20:12:03.675313  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /usr/share/ca-certificates/7409622.pem (1708 bytes)
	I0729 20:12:03.698283  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 20:12:03.720354  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem --> /usr/share/ca-certificates/740962.pem (1338 bytes)
	I0729 20:12:03.742418  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 20:12:03.758844  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 20:12:03.774962  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 20:12:03.789867  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 20:12:03.805056  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 20:12:03.820390  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 20:12:03.835756  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 20:12:03.853933  755599 ssh_runner.go:195] Run: openssl version
	I0729 20:12:03.859717  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7409622.pem && ln -fs /usr/share/ca-certificates/7409622.pem /etc/ssl/certs/7409622.pem"
	I0729 20:12:03.870979  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7409622.pem
	I0729 20:12:03.875043  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 20:12:03.875111  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7409622.pem
	I0729 20:12:03.880494  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7409622.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 20:12:03.891853  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 20:12:03.902810  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:12:03.906894  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:12:03.906943  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:12:03.912187  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 20:12:03.923880  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/740962.pem && ln -fs /usr/share/ca-certificates/740962.pem /etc/ssl/certs/740962.pem"
	I0729 20:12:03.934179  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/740962.pem
	I0729 20:12:03.938580  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 20:12:03.938645  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/740962.pem
	I0729 20:12:03.943899  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/740962.pem /etc/ssl/certs/51391683.0"
	I0729 20:12:03.954088  755599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:12:03.957949  755599 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 20:12:03.958016  755599 kubeadm.go:934] updating node {m03 192.168.39.53 8443 v1.30.3 crio true true} ...
	I0729 20:12:03.958134  755599 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344518-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 20:12:03.958167  755599 kube-vip.go:115] generating kube-vip config ...
	I0729 20:12:03.958202  755599 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 20:12:03.972405  755599 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 20:12:03.972485  755599 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 20:12:03.972576  755599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 20:12:03.982246  755599 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 20:12:03.982305  755599 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 20:12:03.990936  755599 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 20:12:03.990949  755599 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 20:12:03.990967  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 20:12:03.990974  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 20:12:03.990949  755599 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 20:12:03.991042  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:12:03.991066  755599 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 20:12:03.991160  755599 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 20:12:04.008566  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 20:12:04.008583  755599 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 20:12:04.008614  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 20:12:04.008675  755599 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 20:12:04.008682  755599 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 20:12:04.008712  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 20:12:04.029548  755599 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 20:12:04.029585  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 20:12:04.841331  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 20:12:04.850847  755599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 20:12:04.866796  755599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 20:12:04.882035  755599 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 20:12:04.897931  755599 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 20:12:04.901677  755599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 20:12:04.912673  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:12:05.027888  755599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:12:05.044310  755599 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:12:05.044843  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:12:05.044904  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:12:05.061266  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42771
	I0729 20:12:05.061837  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:12:05.062673  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:12:05.062788  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:12:05.063225  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:12:05.064352  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:12:05.064806  755599 start.go:317] joinCluster: &{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:12:05.064955  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 20:12:05.064977  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:12:05.067982  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:12:05.068438  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:12:05.068466  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:12:05.068626  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:12:05.068827  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:12:05.068968  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:12:05.069120  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:12:05.239152  755599 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:12:05.239229  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6gvqoy.bmocsw69jkjfmihd --discovery-token-ca-cert-hash sha256:6ca3a9d55ee61a543466ff10da1967c1b50ddc5ed0f369803448ea7dd15a35e4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344518-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I0729 20:12:28.178733  755599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6gvqoy.bmocsw69jkjfmihd --discovery-token-ca-cert-hash sha256:6ca3a9d55ee61a543466ff10da1967c1b50ddc5ed0f369803448ea7dd15a35e4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344518-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (22.939473724s)
	I0729 20:12:28.178774  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 20:12:28.627642  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-344518-m03 minikube.k8s.io/updated_at=2024_07_29T20_12_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a minikube.k8s.io/name=ha-344518 minikube.k8s.io/primary=false
	I0729 20:12:28.753291  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-344518-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 20:12:28.849098  755599 start.go:319] duration metric: took 23.784285616s to joinCluster
	I0729 20:12:28.849339  755599 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:12:28.849701  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:12:28.851134  755599 out.go:177] * Verifying Kubernetes components...
	I0729 20:12:28.852378  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:12:29.109238  755599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:12:29.193132  755599 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:12:29.193507  755599 kapi.go:59] client config for ha-344518: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.crt", KeyFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key", CAFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 20:12:29.193605  755599 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.238:8443
	I0729 20:12:29.193887  755599 node_ready.go:35] waiting up to 6m0s for node "ha-344518-m03" to be "Ready" ...
	I0729 20:12:29.194004  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:29.194015  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:29.194028  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:29.194036  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:29.198929  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
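
From here on the log is a readiness poll: node_ready.go repeatedly GETs /api/v1/nodes/ha-344518-m03 through the client configured above (note the stale VIP host being overridden with https://192.168.39.238:8443) and waits up to 6 minutes for the node's Ready condition to turn True. A standard-library Go sketch of the same poll, using the client certificate, key, and CA paths from the kapi.go config line; the 500ms interval is an assumed value for illustration:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "encoding/json"
        "fmt"
        "net/http"
        "os"
        "time"
    )

    func main() {
        mk := "/home/jenkins/minikube-integration/19344-733808/.minikube"
        cert, err := tls.LoadX509KeyPair(mk+"/profiles/ha-344518/client.crt", mk+"/profiles/ha-344518/client.key")
        if err != nil {
            panic(err)
        }
        caPEM, err := os.ReadFile(mk + "/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
            Certificates: []tls.Certificate{cert},
            RootCAs:      pool,
        }}}

        // Poll GET /api/v1/nodes/ha-344518-m03 until the Ready condition is True.
        for {
            resp, err := client.Get("https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03")
            if err != nil {
                panic(err)
            }
            var node struct {
                Status struct {
                    Conditions []struct {
                        Type   string `json:"type"`
                        Status string `json:"status"`
                    } `json:"conditions"`
                } `json:"status"`
            }
            if err := json.NewDecoder(resp.Body).Decode(&node); err != nil {
                panic(err)
            }
            resp.Body.Close()
            for _, c := range node.Status.Conditions {
                if c.Type == "Ready" {
                    fmt.Println("Ready:", c.Status)
                    if c.Status == "True" {
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // assumed poll interval
        }
    }
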
	I0729 20:12:29.694081  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:29.694110  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:29.694123  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:29.694131  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:29.696969  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:30.195083  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:30.195105  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:30.195117  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:30.195122  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:30.198251  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:30.694221  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:30.694252  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:30.694264  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:30.694271  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:30.776976  755599 round_trippers.go:574] Response Status: 200 OK in 82 milliseconds
	I0729 20:12:31.194384  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:31.194412  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:31.194424  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:31.194432  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:31.197437  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:31.197961  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:31.694342  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:31.694368  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:31.694377  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:31.694382  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:31.697493  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:32.194300  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:32.194330  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:32.194341  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:32.194348  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:32.197995  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:32.694861  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:32.694888  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:32.694900  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:32.694905  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:32.698277  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:33.195075  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:33.195103  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:33.195113  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:33.195118  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:33.198320  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:33.198991  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:33.694254  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:33.694293  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:33.694303  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:33.694307  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:33.697710  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:34.194794  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:34.194827  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:34.194838  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:34.194842  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:34.198051  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:34.694460  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:34.694486  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:34.694499  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:34.694505  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:34.697707  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:35.195117  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:35.195143  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:35.195164  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:35.195171  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:35.198488  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:35.199067  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:35.694359  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:35.694388  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:35.694400  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:35.694404  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:35.697225  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:36.194764  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:36.194786  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:36.194795  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:36.194799  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:36.198201  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:36.694395  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:36.694417  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:36.694425  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:36.694431  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:36.705811  755599 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0729 20:12:37.194827  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:37.194848  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:37.194857  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:37.194861  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:37.198311  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:37.694380  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:37.694403  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:37.694413  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:37.694416  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:37.697109  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:37.697646  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:38.194968  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:38.194992  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:38.195001  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:38.195005  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:38.198024  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:38.695188  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:38.695218  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:38.695229  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:38.695233  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:38.698390  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:39.195113  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:39.195136  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:39.195145  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:39.195156  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:39.199022  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:39.694388  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:39.694410  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:39.694419  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:39.694424  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:39.697664  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:39.698221  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:40.195074  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:40.195100  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:40.195112  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:40.195117  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:40.198365  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:40.694245  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:40.694291  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:40.694304  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:40.694310  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:40.697965  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:41.194682  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:41.194708  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:41.194719  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:41.194723  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:41.197877  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:41.694829  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:41.694853  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:41.694865  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:41.694870  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:41.698082  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:41.698573  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:42.195165  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:42.195194  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:42.195207  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:42.195214  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:42.198967  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:42.695014  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:42.695038  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:42.695047  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:42.695051  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:42.698089  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:43.194893  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:43.194918  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:43.194931  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:43.194939  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:43.198054  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:43.695187  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:43.695217  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:43.695230  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:43.695235  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:43.698780  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:43.699330  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:44.194691  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:44.194715  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:44.194724  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:44.194728  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:44.198400  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:44.694955  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:44.694981  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:44.694994  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:44.694998  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:44.698241  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:45.194448  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:45.194472  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:45.194481  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:45.194485  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:45.197719  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:45.694173  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:45.694197  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:45.694206  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:45.694212  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:45.697817  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:46.194939  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:46.194962  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.194972  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.194979  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.198301  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:46.198876  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:46.694223  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:46.694243  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.694254  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.694259  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.698100  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:46.698587  755599 node_ready.go:49] node "ha-344518-m03" has status "Ready":"True"
	I0729 20:12:46.698607  755599 node_ready.go:38] duration metric: took 17.504700526s for node "ha-344518-m03" to be "Ready" ...
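
Note: the block above is minikube's node-readiness wait, a GET of /api/v1/nodes/ha-344518-m03 roughly every 500ms until the node's Ready condition turns True (about 17.5s in this run); the GET / Request Headers / Response Status lines are client-go's own request logging (round_trippers.go), emitted because the test runs with verbose logging. As a rough illustration only, not minikube's actual implementation, a minimal client-go sketch of the same poll could look like the following; the kubeconfig path is a hypothetical placeholder.

// A minimal sketch of the readiness poll seen above (assumes client-go).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; substitute the path for your cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		node, err := client.CoreV1().Nodes().Get(ctx, "ha-344518-m03", metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		if ctx.Err() != nil {
			panic("timed out waiting for node to become Ready")
		}
		// Matches the ~500ms cadence visible in the log above.
		time.Sleep(500 * time.Millisecond)
	}
}
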
	I0729 20:12:46.698616  755599 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 20:12:46.698692  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:12:46.698703  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.698714  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.698724  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.707436  755599 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 20:12:46.713350  755599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wzmc5" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.713431  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wzmc5
	I0729 20:12:46.713436  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.713443  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.713449  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.716071  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.716794  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:46.716812  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.716819  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.716824  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.719004  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.719429  755599 pod_ready.go:92] pod "coredns-7db6d8ff4d-wzmc5" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:46.719447  755599 pod_ready.go:81] duration metric: took 6.075087ms for pod "coredns-7db6d8ff4d-wzmc5" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.719455  755599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xpkp6" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.719499  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xpkp6
	I0729 20:12:46.719507  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.719513  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.719518  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.722094  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.722639  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:46.722653  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.722662  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.722668  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.725126  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.725846  755599 pod_ready.go:92] pod "coredns-7db6d8ff4d-xpkp6" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:46.725871  755599 pod_ready.go:81] duration metric: took 6.410229ms for pod "coredns-7db6d8ff4d-xpkp6" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.725879  755599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.725948  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344518
	I0729 20:12:46.725959  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.725967  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.725970  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.728666  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.729395  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:46.729406  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.729414  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.729417  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.731496  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.731969  755599 pod_ready.go:92] pod "etcd-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:46.731987  755599 pod_ready.go:81] duration metric: took 6.102181ms for pod "etcd-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.731996  755599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.732071  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344518-m02
	I0729 20:12:46.732080  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.732087  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.732091  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.734223  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.734764  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:46.734781  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.734791  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.734798  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.737552  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.738176  755599 pod_ready.go:92] pod "etcd-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:46.738196  755599 pod_ready.go:81] duration metric: took 6.193814ms for pod "etcd-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.738206  755599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.894576  755599 request.go:629] Waited for 156.307895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344518-m03
	I0729 20:12:46.894653  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344518-m03
	I0729 20:12:46.894659  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.894666  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.894673  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.898073  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
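
Note: the "Waited for ... due to client-side throttling, not priority and fairness" messages are emitted by client-go when its local token-bucket rate limiter delays a request; they are unrelated to the API server's Priority and Fairness feature. The burst of per-pod and per-node GETs here simply exceeds the default client-side limit. A hedged sketch of where that limit lives follows; the QPS/Burst values and the helper name are illustrative, not minikube's settings.

// Sketch: the client-side limiter that produces the "throttling" delays above.
package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newClientWithHigherLimits builds a clientset whose client-side rate limiter
// allows more requests per second. Name and values chosen for illustration.
func newClientWithHigherLimits(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// client-go's rest.Config defaults to roughly QPS=5, Burst=10; requests
	// beyond that are delayed locally and logged as "client-side throttling".
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
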
	I0729 20:12:47.094541  755599 request.go:629] Waited for 195.902641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:47.094615  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:47.094623  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:47.094635  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:47.094645  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:47.097349  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:47.097912  755599 pod_ready.go:92] pod "etcd-ha-344518-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:47.097934  755599 pod_ready.go:81] duration metric: took 359.721312ms for pod "etcd-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:47.097954  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:47.295018  755599 request.go:629] Waited for 196.989648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518
	I0729 20:12:47.295078  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518
	I0729 20:12:47.295084  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:47.295091  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:47.295096  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:47.298841  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:47.495140  755599 request.go:629] Waited for 195.383242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:47.495272  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:47.495283  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:47.495294  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:47.495301  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:47.498928  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:47.499412  755599 pod_ready.go:92] pod "kube-apiserver-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:47.499432  755599 pod_ready.go:81] duration metric: took 401.471192ms for pod "kube-apiserver-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:47.499443  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:47.694469  755599 request.go:629] Waited for 194.955371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518-m02
	I0729 20:12:47.694572  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518-m02
	I0729 20:12:47.694582  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:47.694593  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:47.694602  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:47.698239  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:47.894416  755599 request.go:629] Waited for 195.286523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:47.894487  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:47.894493  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:47.894501  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:47.894505  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:47.898022  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:47.898687  755599 pod_ready.go:92] pod "kube-apiserver-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:47.898709  755599 pod_ready.go:81] duration metric: took 399.260118ms for pod "kube-apiserver-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:47.898722  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:48.094707  755599 request.go:629] Waited for 195.891774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518-m03
	I0729 20:12:48.094772  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518-m03
	I0729 20:12:48.094778  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:48.094786  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:48.094789  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:48.097772  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:48.295159  755599 request.go:629] Waited for 196.548603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:48.295223  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:48.295229  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:48.295236  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:48.295241  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:48.298595  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:48.299195  755599 pod_ready.go:92] pod "kube-apiserver-ha-344518-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:48.299221  755599 pod_ready.go:81] duration metric: took 400.493245ms for pod "kube-apiserver-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:48.299232  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:48.494447  755599 request.go:629] Waited for 195.021974ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518
	I0729 20:12:48.494546  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518
	I0729 20:12:48.494558  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:48.494572  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:48.494589  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:48.497955  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:48.694851  755599 request.go:629] Waited for 196.266047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:48.694925  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:48.694932  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:48.694943  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:48.694951  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:48.698281  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:48.699030  755599 pod_ready.go:92] pod "kube-controller-manager-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:48.699052  755599 pod_ready.go:81] duration metric: took 399.812722ms for pod "kube-controller-manager-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:48.699066  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:48.895071  755599 request.go:629] Waited for 195.895187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518-m02
	I0729 20:12:48.895134  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518-m02
	I0729 20:12:48.895139  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:48.895157  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:48.895167  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:48.898558  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:49.094513  755599 request.go:629] Waited for 195.267376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:49.094601  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:49.094609  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:49.094620  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:49.094629  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:49.100269  755599 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 20:12:49.100756  755599 pod_ready.go:92] pod "kube-controller-manager-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:49.100778  755599 pod_ready.go:81] duration metric: took 401.703428ms for pod "kube-controller-manager-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:49.100791  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:49.294926  755599 request.go:629] Waited for 194.024383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518-m03
	I0729 20:12:49.294991  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518-m03
	I0729 20:12:49.294997  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:49.295005  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:49.295011  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:49.298168  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:49.495264  755599 request.go:629] Waited for 196.358066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:49.495331  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:49.495337  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:49.495347  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:49.495355  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:49.498359  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:49.498925  755599 pod_ready.go:92] pod "kube-controller-manager-ha-344518-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:49.498947  755599 pod_ready.go:81] duration metric: took 398.149039ms for pod "kube-controller-manager-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:49.498957  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fh6rg" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:49.694452  755599 request.go:629] Waited for 195.421058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fh6rg
	I0729 20:12:49.694520  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fh6rg
	I0729 20:12:49.694525  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:49.694532  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:49.694536  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:49.697950  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:49.895039  755599 request.go:629] Waited for 196.366117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:49.895109  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:49.895115  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:49.895122  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:49.895126  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:49.898150  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:49.898751  755599 pod_ready.go:92] pod "kube-proxy-fh6rg" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:49.898771  755599 pod_ready.go:81] duration metric: took 399.807911ms for pod "kube-proxy-fh6rg" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:49.898780  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nfxp2" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:50.095225  755599 request.go:629] Waited for 196.35648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nfxp2
	I0729 20:12:50.095292  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nfxp2
	I0729 20:12:50.095298  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:50.095305  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:50.095310  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:50.098510  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:50.294674  755599 request.go:629] Waited for 195.360527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:50.294771  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:50.294780  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:50.294791  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:50.294797  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:50.297738  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:50.298210  755599 pod_ready.go:92] pod "kube-proxy-nfxp2" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:50.298232  755599 pod_ready.go:81] duration metric: took 399.446317ms for pod "kube-proxy-nfxp2" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:50.298242  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s8wn5" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:50.494281  755599 request.go:629] Waited for 195.962731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8wn5
	I0729 20:12:50.494378  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8wn5
	I0729 20:12:50.494388  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:50.494395  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:50.494404  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:50.497661  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:50.694739  755599 request.go:629] Waited for 196.392215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:50.694845  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:50.694852  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:50.694860  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:50.694866  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:50.698157  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:50.698721  755599 pod_ready.go:92] pod "kube-proxy-s8wn5" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:50.698744  755599 pod_ready.go:81] duration metric: took 400.496066ms for pod "kube-proxy-s8wn5" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:50.698754  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:50.894780  755599 request.go:629] Waited for 195.954883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518
	I0729 20:12:50.894868  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518
	I0729 20:12:50.894874  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:50.894882  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:50.894886  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:50.898020  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:51.094575  755599 request.go:629] Waited for 196.002294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:51.094670  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:51.094676  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:51.094685  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:51.094691  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:51.098002  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:51.098475  755599 pod_ready.go:92] pod "kube-scheduler-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:51.098493  755599 pod_ready.go:81] duration metric: took 399.73378ms for pod "kube-scheduler-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:51.098503  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:51.295276  755599 request.go:629] Waited for 196.695226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518-m02
	I0729 20:12:51.295371  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518-m02
	I0729 20:12:51.295377  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:51.295386  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:51.295398  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:51.298463  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:51.494604  755599 request.go:629] Waited for 195.512534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:51.494660  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:51.494668  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:51.494678  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:51.494685  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:51.497553  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:51.498039  755599 pod_ready.go:92] pod "kube-scheduler-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:51.498062  755599 pod_ready.go:81] duration metric: took 399.552682ms for pod "kube-scheduler-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:51.498072  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:51.695101  755599 request.go:629] Waited for 196.945766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518-m03
	I0729 20:12:51.695189  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518-m03
	I0729 20:12:51.695196  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:51.695208  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:51.695212  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:51.698528  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:51.894595  755599 request.go:629] Waited for 195.391784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:51.894670  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:51.894678  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:51.894689  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:51.894695  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:51.897830  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:51.898422  755599 pod_ready.go:92] pod "kube-scheduler-ha-344518-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:51.898444  755599 pod_ready.go:81] duration metric: took 400.364758ms for pod "kube-scheduler-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:51.898456  755599 pod_ready.go:38] duration metric: took 5.199830746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
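
Note: each pod_ready wait above pairs a GET of the pod with a GET of its node and then inspects the pod's Ready condition. A minimal sketch of that condition check follows, assuming a clientset built as in the earlier sketch; the function name is made up for illustration.

// Sketch: the PodReady condition check behind the pod_ready waits above.
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the named kube-system pod has condition Ready=True.
func isPodReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
	pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
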
	I0729 20:12:51.898476  755599 api_server.go:52] waiting for apiserver process to appear ...
	I0729 20:12:51.898542  755599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:12:51.912905  755599 api_server.go:72] duration metric: took 23.063467882s to wait for apiserver process to appear ...
	I0729 20:12:51.912930  755599 api_server.go:88] waiting for apiserver healthz status ...
	I0729 20:12:51.912955  755599 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0729 20:12:51.917598  755599 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0729 20:12:51.917694  755599 round_trippers.go:463] GET https://192.168.39.238:8443/version
	I0729 20:12:51.917705  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:51.917718  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:51.917723  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:51.918594  755599 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 20:12:51.918833  755599 api_server.go:141] control plane version: v1.30.3
	I0729 20:12:51.918857  755599 api_server.go:131] duration metric: took 5.918903ms to wait for apiserver health ...
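
Note: the apiserver health check is a raw GET of /healthz that expects the literal body "ok" (returned above), followed by a GET of /version to record the control-plane version (v1.30.3 here). A sketch of the same probe through client-go's REST client, with the clientset assumed as before and the helper name illustrative:

// Sketch: probe /healthz the way the check above does.
package sketch

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// apiserverHealthy returns nil when GET /healthz answers with body "ok".
func apiserverHealthy(ctx context.Context, client kubernetes.Interface) error {
	body, err := client.CoreV1().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return err
	}
	if string(body) != "ok" {
		return fmt.Errorf("unexpected /healthz body: %q", string(body))
	}
	return nil
}
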
	I0729 20:12:51.918866  755599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 20:12:52.095130  755599 request.go:629] Waited for 176.179213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:12:52.095216  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:12:52.095221  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:52.095229  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:52.095236  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:52.101815  755599 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 20:12:52.108555  755599 system_pods.go:59] 24 kube-system pods found
	I0729 20:12:52.108591  755599 system_pods.go:61] "coredns-7db6d8ff4d-wzmc5" [2badd33a-9085-4e72-9934-f31c6142556e] Running
	I0729 20:12:52.108598  755599 system_pods.go:61] "coredns-7db6d8ff4d-xpkp6" [89bb48a7-72c4-4f23-aad8-530fc74e76e0] Running
	I0729 20:12:52.108603  755599 system_pods.go:61] "etcd-ha-344518" [2d9e6a92-a45e-41fc-9e29-e59128b7b830] Running
	I0729 20:12:52.108608  755599 system_pods.go:61] "etcd-ha-344518-m02" [6c6a4ddc-69fb-45bd-abbb-e51acb5da561] Running
	I0729 20:12:52.108613  755599 system_pods.go:61] "etcd-ha-344518-m03" [1e322c16-d9d5-4bf8-99b1-de5db95a3965] Running
	I0729 20:12:52.108618  755599 system_pods.go:61] "kindnet-6qbz5" [cc428fce-2821-412d-b483-782bc277c4f7] Running
	I0729 20:12:52.108624  755599 system_pods.go:61] "kindnet-jj2b4" [b53c635e-8077-466a-a171-23e84c33bd25] Running
	I0729 20:12:52.108634  755599 system_pods.go:61] "kindnet-nl4kz" [39441191-433d-4abc-b0c8-d4114713f68a] Running
	I0729 20:12:52.108639  755599 system_pods.go:61] "kube-apiserver-ha-344518" [aadbbdf5-6f91-4232-8c08-fc2f91cf35e5] Running
	I0729 20:12:52.108645  755599 system_pods.go:61] "kube-apiserver-ha-344518-m02" [2bc89a1d-0681-451a-bb47-0d82fbeb6a0f] Running
	I0729 20:12:52.108651  755599 system_pods.go:61] "kube-apiserver-ha-344518-m03" [4c708671-9ded-4b8e-80e4-58182a79597d] Running
	I0729 20:12:52.108658  755599 system_pods.go:61] "kube-controller-manager-ha-344518" [3c1f20e1-80d6-4bef-a115-d4e62d3d938e] Running
	I0729 20:12:52.108666  755599 system_pods.go:61] "kube-controller-manager-ha-344518-m02" [31b506c1-6be7-4e9a-a96e-b2ac161edcab] Running
	I0729 20:12:52.108672  755599 system_pods.go:61] "kube-controller-manager-ha-344518-m03" [9a23ca85-bda2-4023-b05d-b3c0ceba1e67] Running
	I0729 20:12:52.108677  755599 system_pods.go:61] "kube-proxy-fh6rg" [275f3f36-39e1-461a-9c4d-4b2d8773d325] Running
	I0729 20:12:52.108683  755599 system_pods.go:61] "kube-proxy-nfxp2" [827466b6-aa03-4707-8594-b5eaaa864ebe] Running
	I0729 20:12:52.108691  755599 system_pods.go:61] "kube-proxy-s8wn5" [cd1b4894-f7bf-4249-a6d8-c89bbe6e2ab7] Running
	I0729 20:12:52.108697  755599 system_pods.go:61] "kube-scheduler-ha-344518" [e8ae3853-ac48-46fa-88b6-31b4c0f2c527] Running
	I0729 20:12:52.108704  755599 system_pods.go:61] "kube-scheduler-ha-344518-m02" [bd8f41d2-f637-4c19-8b66-7ffc1513d895] Running
	I0729 20:12:52.108710  755599 system_pods.go:61] "kube-scheduler-ha-344518-m03" [500b3aea-f25e-4aae-84d6-b261db07b35a] Running
	I0729 20:12:52.108716  755599 system_pods.go:61] "kube-vip-ha-344518" [140d2a2f-c461-421e-9b01-a5e6d7f2b9f8] Running
	I0729 20:12:52.108722  755599 system_pods.go:61] "kube-vip-ha-344518-m02" [6024c813-df16-43b4-83cc-e978ceb00d51] Running
	I0729 20:12:52.108728  755599 system_pods.go:61] "kube-vip-ha-344518-m03" [45610f87-5e2d-46c3-8f8f-ba77b685fd86] Running
	I0729 20:12:52.108733  755599 system_pods.go:61] "storage-provisioner" [9e8bd9d2-8adf-47de-8e32-05d64002a631] Running
	I0729 20:12:52.108743  755599 system_pods.go:74] duration metric: took 189.869611ms to wait for pod list to return data ...
	I0729 20:12:52.108756  755599 default_sa.go:34] waiting for default service account to be created ...
	I0729 20:12:52.295212  755599 request.go:629] Waited for 186.362246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0729 20:12:52.295338  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0729 20:12:52.295350  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:52.295362  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:52.295372  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:52.298650  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:52.298814  755599 default_sa.go:45] found service account: "default"
	I0729 20:12:52.298835  755599 default_sa.go:55] duration metric: took 190.069659ms for default service account to be created ...
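
Note: the default_sa step lists ServiceAccounts in the default namespace until one named "default" appears; that account is created asynchronously by the controller manager after the namespace exists, which is why the step waits at all. A short sketch follows, clientset assumed as before and helper name illustrative.

// Sketch: check for the "default" ServiceAccount the way the wait above does.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func defaultServiceAccountExists(ctx context.Context, client kubernetes.Interface) (bool, error) {
	sas, err := client.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, sa := range sas.Items {
		if sa.Name == "default" {
			return true, nil
		}
	}
	return false, nil
}
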
	I0729 20:12:52.298846  755599 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 20:12:52.494241  755599 request.go:629] Waited for 195.307096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:12:52.494345  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:12:52.494353  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:52.494363  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:52.494371  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:52.508285  755599 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0729 20:12:52.516937  755599 system_pods.go:86] 24 kube-system pods found
	I0729 20:12:52.516968  755599 system_pods.go:89] "coredns-7db6d8ff4d-wzmc5" [2badd33a-9085-4e72-9934-f31c6142556e] Running
	I0729 20:12:52.516974  755599 system_pods.go:89] "coredns-7db6d8ff4d-xpkp6" [89bb48a7-72c4-4f23-aad8-530fc74e76e0] Running
	I0729 20:12:52.516978  755599 system_pods.go:89] "etcd-ha-344518" [2d9e6a92-a45e-41fc-9e29-e59128b7b830] Running
	I0729 20:12:52.516983  755599 system_pods.go:89] "etcd-ha-344518-m02" [6c6a4ddc-69fb-45bd-abbb-e51acb5da561] Running
	I0729 20:12:52.516986  755599 system_pods.go:89] "etcd-ha-344518-m03" [1e322c16-d9d5-4bf8-99b1-de5db95a3965] Running
	I0729 20:12:52.516990  755599 system_pods.go:89] "kindnet-6qbz5" [cc428fce-2821-412d-b483-782bc277c4f7] Running
	I0729 20:12:52.516994  755599 system_pods.go:89] "kindnet-jj2b4" [b53c635e-8077-466a-a171-23e84c33bd25] Running
	I0729 20:12:52.516998  755599 system_pods.go:89] "kindnet-nl4kz" [39441191-433d-4abc-b0c8-d4114713f68a] Running
	I0729 20:12:52.517001  755599 system_pods.go:89] "kube-apiserver-ha-344518" [aadbbdf5-6f91-4232-8c08-fc2f91cf35e5] Running
	I0729 20:12:52.517006  755599 system_pods.go:89] "kube-apiserver-ha-344518-m02" [2bc89a1d-0681-451a-bb47-0d82fbeb6a0f] Running
	I0729 20:12:52.517010  755599 system_pods.go:89] "kube-apiserver-ha-344518-m03" [4c708671-9ded-4b8e-80e4-58182a79597d] Running
	I0729 20:12:52.517014  755599 system_pods.go:89] "kube-controller-manager-ha-344518" [3c1f20e1-80d6-4bef-a115-d4e62d3d938e] Running
	I0729 20:12:52.517018  755599 system_pods.go:89] "kube-controller-manager-ha-344518-m02" [31b506c1-6be7-4e9a-a96e-b2ac161edcab] Running
	I0729 20:12:52.517022  755599 system_pods.go:89] "kube-controller-manager-ha-344518-m03" [9a23ca85-bda2-4023-b05d-b3c0ceba1e67] Running
	I0729 20:12:52.517026  755599 system_pods.go:89] "kube-proxy-fh6rg" [275f3f36-39e1-461a-9c4d-4b2d8773d325] Running
	I0729 20:12:52.517030  755599 system_pods.go:89] "kube-proxy-nfxp2" [827466b6-aa03-4707-8594-b5eaaa864ebe] Running
	I0729 20:12:52.517033  755599 system_pods.go:89] "kube-proxy-s8wn5" [cd1b4894-f7bf-4249-a6d8-c89bbe6e2ab7] Running
	I0729 20:12:52.517037  755599 system_pods.go:89] "kube-scheduler-ha-344518" [e8ae3853-ac48-46fa-88b6-31b4c0f2c527] Running
	I0729 20:12:52.517041  755599 system_pods.go:89] "kube-scheduler-ha-344518-m02" [bd8f41d2-f637-4c19-8b66-7ffc1513d895] Running
	I0729 20:12:52.517045  755599 system_pods.go:89] "kube-scheduler-ha-344518-m03" [500b3aea-f25e-4aae-84d6-b261db07b35a] Running
	I0729 20:12:52.517049  755599 system_pods.go:89] "kube-vip-ha-344518" [140d2a2f-c461-421e-9b01-a5e6d7f2b9f8] Running
	I0729 20:12:52.517052  755599 system_pods.go:89] "kube-vip-ha-344518-m02" [6024c813-df16-43b4-83cc-e978ceb00d51] Running
	I0729 20:12:52.517057  755599 system_pods.go:89] "kube-vip-ha-344518-m03" [45610f87-5e2d-46c3-8f8f-ba77b685fd86] Running
	I0729 20:12:52.517061  755599 system_pods.go:89] "storage-provisioner" [9e8bd9d2-8adf-47de-8e32-05d64002a631] Running
	I0729 20:12:52.517068  755599 system_pods.go:126] duration metric: took 218.213547ms to wait for k8s-apps to be running ...
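
Note: the k8s-apps check lists every pod in kube-system and requires each to report phase Running, which is what produces the 24-pod inventory above. A sketch of that sweep, clientset assumed as before and helper name illustrative:

// Sketch: verify every kube-system pod is in phase Running.
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func allSystemPodsRunning(ctx context.Context, client kubernetes.Interface) error {
	pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return fmt.Errorf("pod %s is %s, not Running", p.Name, p.Status.Phase)
		}
	}
	fmt.Printf("%d kube-system pods found, all Running\n", len(pods.Items))
	return nil
}
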
	I0729 20:12:52.517075  755599 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 20:12:52.517123  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:12:52.530943  755599 system_svc.go:56] duration metric: took 13.856488ms WaitForService to wait for kubelet
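
Note: the kubelet check shells into the VM and runs systemctl is-active --quiet, treating a zero exit status as "running". The sketch below only illustrates that exit-code idea with os/exec; the SSH transport is omitted and the unit name is written in the plain form rather than minikube's exact argument string.

// Sketch: check a systemd unit's active state by exit code (SSH layer omitted).
package main

import (
	"fmt"
	"os/exec"
)

func kubeletActive() bool {
	// `systemctl is-active --quiet <unit>` exits 0 iff the unit is active.
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive())
}
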
	I0729 20:12:52.530976  755599 kubeadm.go:582] duration metric: took 23.681542554s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 20:12:52.530998  755599 node_conditions.go:102] verifying NodePressure condition ...
	I0729 20:12:52.694327  755599 request.go:629] Waited for 163.250579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes
	I0729 20:12:52.694419  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes
	I0729 20:12:52.694426  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:52.694438  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:52.694447  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:52.699196  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:12:52.700897  755599 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 20:12:52.700926  755599 node_conditions.go:123] node cpu capacity is 2
	I0729 20:12:52.700940  755599 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 20:12:52.700945  755599 node_conditions.go:123] node cpu capacity is 2
	I0729 20:12:52.700951  755599 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 20:12:52.700956  755599 node_conditions.go:123] node cpu capacity is 2
	I0729 20:12:52.700960  755599 node_conditions.go:105] duration metric: took 169.957801ms to run NodePressure ...
	I0729 20:12:52.700974  755599 start.go:241] waiting for startup goroutines ...
	I0729 20:12:52.701000  755599 start.go:255] writing updated cluster config ...
	I0729 20:12:52.701369  755599 ssh_runner.go:195] Run: rm -f paused
	I0729 20:12:52.753542  755599 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 20:12:52.756602  755599 out.go:177] * Done! kubectl is now configured to use "ha-344518" cluster and "default" namespace by default
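
The log above ends with minikube's readiness checks: it lists the kube-system pods, confirms the kubelet unit is active over SSH, then reads each node's capacity (ephemeral storage, CPU) from the API server before declaring the cluster ready. The following is a minimal sketch, not minikube's own code, of the same node-capacity check using the standard client-go API; the kubeconfig path is copied from this test environment and is otherwise an assumption.

// Sketch only: reproduces the node-conditions check logged above with client-go.
// The kubeconfig path below is taken from this test run's environment and is
// an assumption, not something the cluster itself requires.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19344-733808/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		// Mirrors the "node storage ephemeral capacity" / "node cpu capacity" lines above.
		fmt.Printf("%s ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}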
	
	
	==> CRI-O <==
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.384715178Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722284191384692682,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82353932-1648-4e9d-bb48-910101c06340 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.385139797Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=24f2230c-ce62-427b-b1d7-13fa2e5be26a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.385283036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=24f2230c-ce62-427b-b1d7-13fa2e5be26a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.385568967Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722283977503459045,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150057459b6854002f094be091609a708f47a33e024e971dd0a52ee45059feea,PodSandboxId:f573eda8597209c29238367b1f588877b95eb9b1d83c0fe5ec4559abd73e9f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722283817758357207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817764517491,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817701820768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-90
85-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722283806075671801,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172228380
2307884166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bf9f11f403485bba11bb296707954ef1f3951cd0686f3c2aef04ec544f6dfb,PodSandboxId:b4ddbe2050711fe94070483c80962bd7e541ed1a648aeb3a3d80e24b4473e69d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222837840
59244100,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9219d8412b921256fe48925a08aef04f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722283781396419801,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722283781452307427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e957bb1c15cb6b1d0159a0941f43678dfa08f25dc582d6dd58a8d0b4f5f5c00,PodSandboxId:a370dcc0d3fedf538e097c9771a00ae71d07e4d428cf20405b91bab4226a52f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722283781401013403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1cab255995a78a5644e30400e94f037504f1f6a162cac7023d3b2074899a0e7,PodSandboxId:be0b5e7879a7b2011da181d648a80b8faeacd356119b7dd220aa8c4bc5e91e21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722283781423950675,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=24f2230c-ce62-427b-b1d7-13fa2e5be26a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.422064364Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4b071f4a-bdc4-4c21-af34-1f2671335000 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.422184278Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4b071f4a-bdc4-4c21-af34-1f2671335000 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.423157267Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b8c5173-dfea-4bb2-8922-22b233c280a1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.423627404Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722284191423606781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b8c5173-dfea-4bb2-8922-22b233c280a1 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.424102108Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d5b5079-b031-4e03-b49c-368c2bf8d679 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.424152436Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d5b5079-b031-4e03-b49c-368c2bf8d679 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.424419958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722283977503459045,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150057459b6854002f094be091609a708f47a33e024e971dd0a52ee45059feea,PodSandboxId:f573eda8597209c29238367b1f588877b95eb9b1d83c0fe5ec4559abd73e9f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722283817758357207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817764517491,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817701820768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-90
85-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722283806075671801,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172228380
2307884166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bf9f11f403485bba11bb296707954ef1f3951cd0686f3c2aef04ec544f6dfb,PodSandboxId:b4ddbe2050711fe94070483c80962bd7e541ed1a648aeb3a3d80e24b4473e69d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222837840
59244100,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9219d8412b921256fe48925a08aef04f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722283781396419801,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722283781452307427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e957bb1c15cb6b1d0159a0941f43678dfa08f25dc582d6dd58a8d0b4f5f5c00,PodSandboxId:a370dcc0d3fedf538e097c9771a00ae71d07e4d428cf20405b91bab4226a52f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722283781401013403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1cab255995a78a5644e30400e94f037504f1f6a162cac7023d3b2074899a0e7,PodSandboxId:be0b5e7879a7b2011da181d648a80b8faeacd356119b7dd220aa8c4bc5e91e21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722283781423950675,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d5b5079-b031-4e03-b49c-368c2bf8d679 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.458708098Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c603ca45-404b-42a3-a95a-9420b1eca22e name=/runtime.v1.RuntimeService/Version
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.458788767Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c603ca45-404b-42a3-a95a-9420b1eca22e name=/runtime.v1.RuntimeService/Version
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.462639866Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=62002a87-ea53-402f-b92f-21661267251d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.463453650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722284191463401785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=62002a87-ea53-402f-b92f-21661267251d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.464135841Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6be6f31-f18d-4412-87c4-9473a1df2fe3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.464265895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6be6f31-f18d-4412-87c4-9473a1df2fe3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.464610688Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722283977503459045,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150057459b6854002f094be091609a708f47a33e024e971dd0a52ee45059feea,PodSandboxId:f573eda8597209c29238367b1f588877b95eb9b1d83c0fe5ec4559abd73e9f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722283817758357207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817764517491,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817701820768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-90
85-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722283806075671801,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172228380
2307884166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bf9f11f403485bba11bb296707954ef1f3951cd0686f3c2aef04ec544f6dfb,PodSandboxId:b4ddbe2050711fe94070483c80962bd7e541ed1a648aeb3a3d80e24b4473e69d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222837840
59244100,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9219d8412b921256fe48925a08aef04f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722283781396419801,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722283781452307427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e957bb1c15cb6b1d0159a0941f43678dfa08f25dc582d6dd58a8d0b4f5f5c00,PodSandboxId:a370dcc0d3fedf538e097c9771a00ae71d07e4d428cf20405b91bab4226a52f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722283781401013403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1cab255995a78a5644e30400e94f037504f1f6a162cac7023d3b2074899a0e7,PodSandboxId:be0b5e7879a7b2011da181d648a80b8faeacd356119b7dd220aa8c4bc5e91e21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722283781423950675,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6be6f31-f18d-4412-87c4-9473a1df2fe3 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.502489013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c32b30eb-c9d7-4639-897c-af8f7826a7c6 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.502601713Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c32b30eb-c9d7-4639-897c-af8f7826a7c6 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.503774281Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b6a4520e-d7b6-4151-a3fa-2eeb251894d8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.504353268Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722284191504319474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b6a4520e-d7b6-4151-a3fa-2eeb251894d8 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.504780101Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa10ecc5-5f6c-4314-a52c-857b7271d06f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.504844613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa10ecc5-5f6c-4314-a52c-857b7271d06f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:16:31 ha-344518 crio[679]: time="2024-07-29 20:16:31.505224718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722283977503459045,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150057459b6854002f094be091609a708f47a33e024e971dd0a52ee45059feea,PodSandboxId:f573eda8597209c29238367b1f588877b95eb9b1d83c0fe5ec4559abd73e9f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722283817758357207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817764517491,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817701820768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-90
85-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722283806075671801,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172228380
2307884166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bf9f11f403485bba11bb296707954ef1f3951cd0686f3c2aef04ec544f6dfb,PodSandboxId:b4ddbe2050711fe94070483c80962bd7e541ed1a648aeb3a3d80e24b4473e69d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222837840
59244100,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9219d8412b921256fe48925a08aef04f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722283781396419801,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722283781452307427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e957bb1c15cb6b1d0159a0941f43678dfa08f25dc582d6dd58a8d0b4f5f5c00,PodSandboxId:a370dcc0d3fedf538e097c9771a00ae71d07e4d428cf20405b91bab4226a52f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722283781401013403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1cab255995a78a5644e30400e94f037504f1f6a162cac7023d3b2074899a0e7,PodSandboxId:be0b5e7879a7b2011da181d648a80b8faeacd356119b7dd220aa8c4bc5e91e21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722283781423950675,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa10ecc5-5f6c-4314-a52c-857b7271d06f name=/runtime.v1.RuntimeService/ListContainers
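
The CRI-O debug entries above are the server side of Version, ImageFsInfo, and ListContainers calls made over the CRI gRPC API (kubelet and crictl are the usual callers). Below is a minimal client-side sketch using the cri-api Go bindings; the socket path /var/run/crio/crio.sock is the conventional CRI-O endpoint and is an assumption here, not something taken from this report.

// Sketch only: issues the same Version and ListContainers calls seen in the
// CRI-O debug log above. The crio.sock path is the conventional CRI-O
// endpoint and is assumed rather than read from this report.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rt := pb.NewRuntimeServiceClient(conn)

	ver, err := rt.Version(ctx, &pb.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion) // e.g. cri-o 1.29.1

	// Empty filter, so the response is the full container list, matching the
	// "No filters were applied" lines in the log above.
	resp, err := rt.ListContainers(ctx, &pb.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id[:13], c.Metadata.Name, c.State)
	}
}

The "container status" table in the next section appears to be the same ListContainers data rendered in crictl-style columns (truncated container ID, image, age, state, name, attempt, pod ID, pod).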
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	962f37271e54d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   4fd5554044288       busybox-fc5497c4f-fp24v
	7bed7bb792810       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   e6598d2da30cd       coredns-7db6d8ff4d-xpkp6
	150057459b685       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   f573eda859720       storage-provisioner
	4d27dc2036f3c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      6 minutes ago       Running             coredns                   0                   ffb2234aef191       coredns-7db6d8ff4d-wzmc5
	594577e4d332f       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    6 minutes ago       Running             kindnet-cni               0                   aa3121e476fc2       kindnet-nl4kz
	d79e4f49251f6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      6 minutes ago       Running             kube-proxy                0                   08408a18bb915       kube-proxy-fh6rg
	a5bf9f11f4034       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     6 minutes ago       Running             kube-vip                  0                   b4ddbe2050711       kube-vip-ha-344518
	1121b90510c21       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      6 minutes ago       Running             kube-scheduler            0                   b61bed291d877       kube-scheduler-ha-344518
	d1cab255995a7       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      6 minutes ago       Running             kube-apiserver            0                   be0b5e7879a7b       kube-apiserver-ha-344518
	3e957bb1c15cb       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      6 minutes ago       Running             kube-controller-manager   0                   a370dcc0d3fed       kube-controller-manager-ha-344518
	a0e14d313861e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      6 minutes ago       Running             etcd                      0                   259cc56efacfd       etcd-ha-344518
	
	
	==> coredns [4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a] <==
	[INFO] 10.244.0.4:37485 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000059285s
	[INFO] 10.244.1.2:48771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251747s
	[INFO] 10.244.1.2:44435 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001414617s
	[INFO] 10.244.2.2:38735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112057s
	[INFO] 10.244.2.2:35340 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003153904s
	[INFO] 10.244.2.2:54596 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140336s
	[INFO] 10.244.0.4:38854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001949954s
	[INFO] 10.244.0.4:39933 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113699s
	[INFO] 10.244.0.4:54725 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150049s
	[INFO] 10.244.1.2:46191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115875s
	[INFO] 10.244.1.2:54023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001742745s
	[INFO] 10.244.1.2:51538 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140285s
	[INFO] 10.244.1.2:56008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088578s
	[INFO] 10.244.2.2:44895 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095319s
	[INFO] 10.244.2.2:40784 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167082s
	[INFO] 10.244.0.4:48376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120067s
	[INFO] 10.244.0.4:39840 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111609s
	[INFO] 10.244.0.4:38416 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058031s
	[INFO] 10.244.1.2:42578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176608s
	[INFO] 10.244.2.2:48597 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139446s
	[INFO] 10.244.2.2:51477 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106731s
	[INFO] 10.244.0.4:47399 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109762s
	[INFO] 10.244.0.4:48496 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126806s
	[INFO] 10.244.1.2:33090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183559s
	[INFO] 10.244.1.2:58207 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095513s
	
	
	==> coredns [7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c] <==
	[INFO] 10.244.2.2:45817 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00025205s
	[INFO] 10.244.2.2:60259 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158897s
	[INFO] 10.244.2.2:59354 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146719s
	[INFO] 10.244.2.2:40109 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117861s
	[INFO] 10.244.0.4:43889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020394s
	[INFO] 10.244.0.4:34685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072181s
	[INFO] 10.244.0.4:59825 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001335615s
	[INFO] 10.244.0.4:51461 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176686s
	[INFO] 10.244.0.4:35140 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051586s
	[INFO] 10.244.1.2:54871 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115274s
	[INFO] 10.244.1.2:51590 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001521426s
	[INFO] 10.244.1.2:60677 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011059s
	[INFO] 10.244.1.2:48005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106929s
	[INFO] 10.244.2.2:58992 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110446s
	[INFO] 10.244.2.2:41728 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108732s
	[INFO] 10.244.0.4:38164 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104442s
	[INFO] 10.244.1.2:47258 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118558s
	[INFO] 10.244.1.2:38089 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092315s
	[INFO] 10.244.1.2:33841 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075348s
	[INFO] 10.244.2.2:33549 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013334s
	[INFO] 10.244.2.2:53967 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203235s
	[INFO] 10.244.0.4:37211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128698s
	[INFO] 10.244.0.4:50842 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112886s
	[INFO] 10.244.1.2:51560 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000281444s
	[INFO] 10.244.1.2:48121 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000072064s
	
	
	==> describe nodes <==
	Name:               ha-344518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T20_09_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:09:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:16:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:13:21 +0000   Mon, 29 Jul 2024 20:09:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:13:21 +0000   Mon, 29 Jul 2024 20:09:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:13:21 +0000   Mon, 29 Jul 2024 20:09:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:13:21 +0000   Mon, 29 Jul 2024 20:10:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-344518
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58926cc84a1545f2aed136a3e761f2be
	  System UUID:                58926cc8-4a15-45f2-aed1-36a3e761f2be
	  Boot ID:                    53511801-74aa-43cb-9108-0a1fffab4f32
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fp24v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7db6d8ff4d-wzmc5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m30s
	  kube-system                 coredns-7db6d8ff4d-xpkp6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m30s
	  kube-system                 etcd-ha-344518                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m44s
	  kube-system                 kindnet-nl4kz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m30s
	  kube-system                 kube-apiserver-ha-344518             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 kube-controller-manager-ha-344518    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 kube-proxy-fh6rg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-scheduler-ha-344518             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 kube-vip-ha-344518                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m29s  kube-proxy       
	  Normal  Starting                 6m44s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m44s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m44s  kubelet          Node ha-344518 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m44s  kubelet          Node ha-344518 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m44s  kubelet          Node ha-344518 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m31s  node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	  Normal  NodeReady                6m14s  kubelet          Node ha-344518 status is now: NodeReady
	  Normal  RegisteredNode           4m58s  node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	  Normal  RegisteredNode           3m48s  node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	
	
	Name:               ha-344518-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T20_11_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:11:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:14:08 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 20:13:18 +0000   Mon, 29 Jul 2024 20:14:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 20:13:18 +0000   Mon, 29 Jul 2024 20:14:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 20:13:18 +0000   Mon, 29 Jul 2024 20:14:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 20:13:18 +0000   Mon, 29 Jul 2024 20:14:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-344518-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9e624f7b4f7644519a6f4690f28614c0
	  System UUID:                9e624f7b-4f76-4451-9a6f-4690f28614c0
	  Boot ID:                    e119378b-e8db-4356-9172-068b6b98830d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xn8rr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-344518-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m10s
	  kube-system                 kindnet-jj2b4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m16s
	  kube-system                 kube-apiserver-ha-344518-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m14s
	  kube-system                 kube-controller-manager-ha-344518-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-proxy-nfxp2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-scheduler-ha-344518-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-vip-ha-344518-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m12s                  kube-proxy       
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  NodeHasSufficientMemory  5m16s (x8 over 5m16s)  kubelet          Node ha-344518-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m16s (x8 over 5m16s)  kubelet          Node ha-344518-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m16s (x7 over 5m16s)  kubelet          Node ha-344518-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m58s                  node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  NodeNotReady             103s                   node-controller  Node ha-344518-m02 status is now: NodeNotReady
	
	
	Name:               ha-344518-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T20_12_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:12:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:16:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:13:26 +0000   Mon, 29 Jul 2024 20:12:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:13:26 +0000   Mon, 29 Jul 2024 20:12:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:13:26 +0000   Mon, 29 Jul 2024 20:12:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:13:26 +0000   Mon, 29 Jul 2024 20:12:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-344518-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 41330caf582148fd80914bd6e0732453
	  System UUID:                41330caf-5821-48fd-8091-4bd6e0732453
	  Boot ID:                    2135b6f7-7490-484b-8671-5d7e83df96c0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-22rcc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-344518-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m4s
	  kube-system                 kindnet-6qbz5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-apiserver-ha-344518-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-controller-manager-ha-344518-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-proxy-s8wn5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-scheduler-ha-344518-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-vip-ha-344518-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node ha-344518-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node ha-344518-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node ha-344518-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-344518-m03 event: Registered Node ha-344518-m03 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-344518-m03 event: Registered Node ha-344518-m03 in Controller
	  Normal  RegisteredNode           3m48s                node-controller  Node ha-344518-m03 event: Registered Node ha-344518-m03 in Controller
	
	
	Name:               ha-344518-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T20_13_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:13:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:16:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:14:01 +0000   Mon, 29 Jul 2024 20:13:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:14:01 +0000   Mon, 29 Jul 2024 20:13:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:14:01 +0000   Mon, 29 Jul 2024 20:13:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:14:01 +0000   Mon, 29 Jul 2024 20:13:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-344518-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8a26135ecab4ebcafa4c947c9d6f013
	  System UUID:                d8a26135-ecab-4ebc-afa4-c947c9d6f013
	  Boot ID:                    245dfa10-a723-4afd-9297-c2f80c37bd37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4m6xw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m1s
	  kube-system                 kube-proxy-947zc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal  NodeHasSufficientMemory  3m1s (x2 over 3m1s)  kubelet          Node ha-344518-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m1s (x2 over 3m1s)  kubelet          Node ha-344518-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m1s (x2 over 3m1s)  kubelet          Node ha-344518-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal  RegisteredNode           2m58s                node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal  NodeReady                2m41s                kubelet          Node ha-344518-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 20:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050285] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036102] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.678115] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.781096] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.549111] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.281405] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.054666] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050707] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.158935] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.126079] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.245623] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.820743] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.869843] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.068841] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.242210] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.084855] kauditd_printk_skb: 79 callbacks suppressed
	[Jul29 20:10] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.358609] kauditd_printk_skb: 38 callbacks suppressed
	[Jul29 20:11] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50] <==
	{"level":"warn","ts":"2024-07-29T20:16:31.663304Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.763393Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.767772Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.77645Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.780106Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.796571Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.805258Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.811908Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.816482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.819429Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.826671Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.833289Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.839838Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.843287Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.84672Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.856494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.862692Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.862891Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.868704Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.869675Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.872378Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.876009Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.882977Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.889999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:16:31.896291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:16:31 up 7 min,  0 users,  load average: 0.16, 0.20, 0.11
	Linux ha-344518 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f] <==
	I0729 20:15:57.004450       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	I0729 20:16:06.996144       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:16:06.996386       1 main.go:299] handling current node
	I0729 20:16:06.996478       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:16:06.996507       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:16:06.996842       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:16:06.996910       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:16:06.997051       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:16:06.997098       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	I0729 20:16:16.996685       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:16:16.996837       1 main.go:299] handling current node
	I0729 20:16:16.996875       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:16:16.996949       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:16:16.997121       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:16:16.997242       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:16:16.997413       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:16:16.997453       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	I0729 20:16:26.998667       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:16:26.998742       1 main.go:299] handling current node
	I0729 20:16:26.998770       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:16:26.998777       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:16:26.998946       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:16:26.998963       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:16:26.999049       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:16:26.999075       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d1cab255995a78a5644e30400e94f037504f1f6a162cac7023d3b2074899a0e7] <==
	I0729 20:09:47.704347       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 20:09:47.719304       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 20:09:47.730068       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 20:10:01.225707       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0729 20:10:01.336350       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0729 20:12:58.853300       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54572: use of closed network connection
	E0729 20:12:59.045359       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54594: use of closed network connection
	E0729 20:12:59.243152       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54616: use of closed network connection
	E0729 20:12:59.420160       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54646: use of closed network connection
	E0729 20:12:59.607976       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54662: use of closed network connection
	E0729 20:12:59.793614       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54688: use of closed network connection
	E0729 20:12:59.975057       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54710: use of closed network connection
	E0729 20:13:00.147484       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54736: use of closed network connection
	E0729 20:13:00.628852       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54780: use of closed network connection
	E0729 20:13:00.800905       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54798: use of closed network connection
	E0729 20:13:00.976476       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54822: use of closed network connection
	E0729 20:13:01.153866       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54848: use of closed network connection
	E0729 20:13:01.332956       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54866: use of closed network connection
	E0729 20:13:01.519905       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54882: use of closed network connection
	I0729 20:13:32.787350       1 trace.go:236] Trace[1732495531]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:2730872b-4fc5-4dad-9025-244522ad211d,client:192.168.39.70,api-group:,api-version:v1,name:kindnet,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 20:13:32.148) (total time: 638ms):
	Trace[1732495531]: ---"watchCache locked acquired" 636ms (20:13:32.784)
	Trace[1732495531]: [638.590252ms] [638.590252ms] END
	I0729 20:13:32.945993       1 trace.go:236] Trace[1011466498]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:8c30973c-dc0f-460a-aab1-8468700473ee,client:192.168.39.70,api-group:,api-version:v1,name:kube-proxy-zwtzc,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-zwtzc,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:DELETE (29-Jul-2024 20:13:32.129) (total time: 815ms):
	Trace[1011466498]: ---"Object deleted from database" 525ms (20:13:32.945)
	Trace[1011466498]: [815.977524ms] [815.977524ms] END
	
	
	==> kube-controller-manager [3e957bb1c15cb6b1d0159a0941f43678dfa08f25dc582d6dd58a8d0b4f5f5c00] <==
	I0729 20:12:25.735929       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-344518-m03" podCIDRs=["10.244.2.0/24"]
	I0729 20:12:30.571336       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-344518-m03"
	I0729 20:12:53.686251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="127.61757ms"
	I0729 20:12:53.787072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.728085ms"
	I0729 20:12:53.909984       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="122.430787ms"
	I0729 20:12:53.936261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.078542ms"
	I0729 20:12:53.936450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.308µs"
	I0729 20:12:54.008040       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.967339ms"
	I0729 20:12:54.008156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.547µs"
	I0729 20:12:55.373149       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.524µs"
	I0729 20:12:55.662573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.19µs"
	I0729 20:12:56.787645       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.464186ms"
	I0729 20:12:56.787756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.972µs"
	I0729 20:12:57.969346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.718317ms"
	I0729 20:12:57.970264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="129.528µs"
	I0729 20:12:58.438280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.631889ms"
	I0729 20:12:58.438474       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.744µs"
	E0729 20:13:30.037081       1 certificate_controller.go:146] Sync csr-xvt5c failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-xvt5c": the object has been modified; please apply your changes to the latest version and try again
	I0729 20:13:30.312902       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-344518-m04\" does not exist"
	I0729 20:13:30.349782       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-344518-m04" podCIDRs=["10.244.3.0/24"]
	I0729 20:13:30.583559       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-344518-m04"
	I0729 20:13:50.809435       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-344518-m04"
	I0729 20:14:48.871711       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-344518-m04"
	I0729 20:14:49.083520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.785544ms"
	I0729 20:14:49.083670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.226µs"
	
	
	==> kube-proxy [d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454] <==
	I0729 20:10:02.484332       1 server_linux.go:69] "Using iptables proxy"
	I0729 20:10:02.506903       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.238"]
	I0729 20:10:02.566932       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 20:10:02.567033       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 20:10:02.567075       1 server_linux.go:165] "Using iptables Proxier"
	I0729 20:10:02.570607       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 20:10:02.570991       1 server.go:872] "Version info" version="v1.30.3"
	I0729 20:10:02.571273       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:10:02.574335       1 config.go:192] "Starting service config controller"
	I0729 20:10:02.574750       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 20:10:02.574896       1 config.go:101] "Starting endpoint slice config controller"
	I0729 20:10:02.574926       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 20:10:02.579386       1 config.go:319] "Starting node config controller"
	I0729 20:10:02.579463       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 20:10:02.675994       1 shared_informer.go:320] Caches are synced for service config
	I0729 20:10:02.676221       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 20:10:02.680431       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be] <==
	E0729 20:09:45.292279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 20:09:45.309301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 20:09:45.309396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 20:09:45.371422       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 20:09:45.371525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 20:09:45.509150       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 20:09:45.509278       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 20:09:45.542301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 20:09:45.542419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 20:09:45.551642       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 20:09:45.553621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 20:09:45.646656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 20:09:45.647288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 20:09:45.665246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 20:09:45.665351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0729 20:09:48.152261       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 20:12:53.607480       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="33523618-d46c-4dc1-9aa3-c3f217c7903f" pod="default/busybox-fc5497c4f-xn8rr" assumedNode="ha-344518-m02" currentNode="ha-344518-m03"
	E0729 20:12:53.637136       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-xn8rr\": pod busybox-fc5497c4f-xn8rr is already assigned to node \"ha-344518-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-xn8rr" node="ha-344518-m03"
	E0729 20:12:53.637339       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 33523618-d46c-4dc1-9aa3-c3f217c7903f(default/busybox-fc5497c4f-xn8rr) was assumed on ha-344518-m03 but assigned to ha-344518-m02" pod="default/busybox-fc5497c4f-xn8rr"
	E0729 20:12:53.638078       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-xn8rr\": pod busybox-fc5497c4f-xn8rr is already assigned to node \"ha-344518-m02\"" pod="default/busybox-fc5497c4f-xn8rr"
	I0729 20:12:53.641620       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-xn8rr" node="ha-344518-m02"
	E0729 20:12:53.671517       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-22rcc\": pod busybox-fc5497c4f-22rcc is already assigned to node \"ha-344518-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-22rcc" node="ha-344518-m03"
	E0729 20:12:53.671567       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 89bd9cd8-932d-4941-bd9f-ecf2f6f90c07(default/busybox-fc5497c4f-22rcc) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-22rcc"
	E0729 20:12:53.671623       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-22rcc\": pod busybox-fc5497c4f-22rcc is already assigned to node \"ha-344518-m03\"" pod="default/busybox-fc5497c4f-22rcc"
	I0729 20:12:53.671656       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-22rcc" node="ha-344518-m03"
	
	
	==> kubelet <==
	Jul 29 20:12:47 ha-344518 kubelet[1384]: E0729 20:12:47.711950    1384 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:12:47 ha-344518 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:12:47 ha-344518 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:12:47 ha-344518 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:12:47 ha-344518 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:12:53 ha-344518 kubelet[1384]: I0729 20:12:53.664371    1384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=172.664276647 podStartE2EDuration="2m52.664276647s" podCreationTimestamp="2024-07-29 20:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 20:10:18.872756869 +0000 UTC m=+31.378717754" watchObservedRunningTime="2024-07-29 20:12:53.664276647 +0000 UTC m=+186.170237534"
	Jul 29 20:12:53 ha-344518 kubelet[1384]: I0729 20:12:53.665327    1384 topology_manager.go:215] "Topology Admit Handler" podUID="34dba935-70e7-453a-996e-56c88c2e27ab" podNamespace="default" podName="busybox-fc5497c4f-fp24v"
	Jul 29 20:12:53 ha-344518 kubelet[1384]: I0729 20:12:53.667873    1384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v678\" (UniqueName: \"kubernetes.io/projected/34dba935-70e7-453a-996e-56c88c2e27ab-kube-api-access-2v678\") pod \"busybox-fc5497c4f-fp24v\" (UID: \"34dba935-70e7-453a-996e-56c88c2e27ab\") " pod="default/busybox-fc5497c4f-fp24v"
	Jul 29 20:12:53 ha-344518 kubelet[1384]: W0729 20:12:53.676080    1384 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-344518" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-344518' and this object
	Jul 29 20:12:53 ha-344518 kubelet[1384]: E0729 20:12:53.676252    1384 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-344518" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-344518' and this object
	Jul 29 20:13:47 ha-344518 kubelet[1384]: E0729 20:13:47.709538    1384 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:13:47 ha-344518 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:13:47 ha-344518 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:13:47 ha-344518 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:13:47 ha-344518 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:14:47 ha-344518 kubelet[1384]: E0729 20:14:47.709167    1384 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:14:47 ha-344518 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:14:47 ha-344518 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:14:47 ha-344518 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:14:47 ha-344518 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:15:47 ha-344518 kubelet[1384]: E0729 20:15:47.709788    1384 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:15:47 ha-344518 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:15:47 ha-344518 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:15:47 ha-344518 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:15:47 ha-344518 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-344518 -n ha-344518
helpers_test.go:261: (dbg) Run:  kubectl --context ha-344518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.90s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (60.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr: exit status 3 (3.201045511s)

                                                
                                                
-- stdout --
	ha-344518
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-344518-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:16:36.431058  760540 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:16:36.431299  760540 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:16:36.431307  760540 out.go:304] Setting ErrFile to fd 2...
	I0729 20:16:36.431313  760540 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:16:36.431504  760540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:16:36.431661  760540 out.go:298] Setting JSON to false
	I0729 20:16:36.431686  760540 mustload.go:65] Loading cluster: ha-344518
	I0729 20:16:36.431754  760540 notify.go:220] Checking for updates...
	I0729 20:16:36.432169  760540 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:16:36.432190  760540 status.go:255] checking status of ha-344518 ...
	I0729 20:16:36.432551  760540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:36.432653  760540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:36.447905  760540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44991
	I0729 20:16:36.448413  760540 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:36.449070  760540 main.go:141] libmachine: Using API Version  1
	I0729 20:16:36.449112  760540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:36.449466  760540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:36.449669  760540 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:16:36.451199  760540 status.go:330] ha-344518 host status = "Running" (err=<nil>)
	I0729 20:16:36.451218  760540 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:16:36.451663  760540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:36.451729  760540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:36.466988  760540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34287
	I0729 20:16:36.467483  760540 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:36.467924  760540 main.go:141] libmachine: Using API Version  1
	I0729 20:16:36.467948  760540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:36.468271  760540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:36.468442  760540 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:16:36.471243  760540 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:36.471694  760540 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:16:36.471732  760540 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:36.471795  760540 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:16:36.472214  760540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:36.472259  760540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:36.487745  760540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35149
	I0729 20:16:36.488388  760540 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:36.488835  760540 main.go:141] libmachine: Using API Version  1
	I0729 20:16:36.488855  760540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:36.489162  760540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:36.489382  760540 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:16:36.489621  760540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:36.489660  760540 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:16:36.492370  760540 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:36.492811  760540 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:16:36.492844  760540 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:36.493006  760540 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:16:36.493182  760540 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:16:36.493321  760540 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:16:36.493589  760540 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:16:36.575173  760540 ssh_runner.go:195] Run: systemctl --version
	I0729 20:16:36.582493  760540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:36.598978  760540 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:16:36.599013  760540 api_server.go:166] Checking apiserver status ...
	I0729 20:16:36.599051  760540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:16:36.613397  760540 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0729 20:16:36.622387  760540 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:16:36.622440  760540 ssh_runner.go:195] Run: ls
	I0729 20:16:36.626513  760540 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:16:36.630764  760540 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:16:36.630791  760540 status.go:422] ha-344518 apiserver status = Running (err=<nil>)
	I0729 20:16:36.630805  760540 status.go:257] ha-344518 status: &{Name:ha-344518 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:16:36.630834  760540 status.go:255] checking status of ha-344518-m02 ...
	I0729 20:16:36.631133  760540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:36.631175  760540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:36.646303  760540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43517
	I0729 20:16:36.646723  760540 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:36.647208  760540 main.go:141] libmachine: Using API Version  1
	I0729 20:16:36.647229  760540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:36.647540  760540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:36.647743  760540 main.go:141] libmachine: (ha-344518-m02) Calling .GetState
	I0729 20:16:36.649532  760540 status.go:330] ha-344518-m02 host status = "Running" (err=<nil>)
	I0729 20:16:36.649552  760540 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:16:36.649964  760540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:36.650007  760540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:36.664915  760540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42781
	I0729 20:16:36.665371  760540 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:36.665825  760540 main.go:141] libmachine: Using API Version  1
	I0729 20:16:36.665847  760540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:36.666186  760540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:36.666357  760540 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:16:36.669268  760540 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:36.669737  760540 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:16:36.669763  760540 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:36.669897  760540 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:16:36.670191  760540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:36.670229  760540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:36.685652  760540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46775
	I0729 20:16:36.686046  760540 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:36.686504  760540 main.go:141] libmachine: Using API Version  1
	I0729 20:16:36.686526  760540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:36.686858  760540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:36.687056  760540 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:16:36.687254  760540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:36.687276  760540 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:16:36.689896  760540 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:36.690336  760540 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:16:36.690377  760540 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:36.690515  760540 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:16:36.690686  760540 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:16:36.690837  760540 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:16:36.691015  760540 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	W0729 20:16:39.236314  760540 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	W0729 20:16:39.236452  760540 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0729 20:16:39.236477  760540 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:16:39.236489  760540 status.go:257] ha-344518-m02 status: &{Name:ha-344518-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 20:16:39.236516  760540 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:16:39.236540  760540 status.go:255] checking status of ha-344518-m03 ...
	I0729 20:16:39.236892  760540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:39.236945  760540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:39.252702  760540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46857
	I0729 20:16:39.253260  760540 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:39.253832  760540 main.go:141] libmachine: Using API Version  1
	I0729 20:16:39.253863  760540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:39.254209  760540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:39.254421  760540 main.go:141] libmachine: (ha-344518-m03) Calling .GetState
	I0729 20:16:39.256362  760540 status.go:330] ha-344518-m03 host status = "Running" (err=<nil>)
	I0729 20:16:39.256388  760540 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:16:39.256718  760540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:39.256762  760540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:39.271559  760540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32803
	I0729 20:16:39.272069  760540 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:39.272552  760540 main.go:141] libmachine: Using API Version  1
	I0729 20:16:39.272574  760540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:39.272873  760540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:39.273099  760540 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:16:39.276222  760540 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:39.276678  760540 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:16:39.276698  760540 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:39.276923  760540 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:16:39.277262  760540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:39.277304  760540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:39.294476  760540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37607
	I0729 20:16:39.294964  760540 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:39.295526  760540 main.go:141] libmachine: Using API Version  1
	I0729 20:16:39.295557  760540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:39.295904  760540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:39.296163  760540 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:16:39.296364  760540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:39.296387  760540 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:16:39.299464  760540 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:39.300090  760540 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:16:39.300114  760540 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:39.300299  760540 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:16:39.300483  760540 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:16:39.300667  760540 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:16:39.300789  760540 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:16:39.383321  760540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:39.397611  760540 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:16:39.397654  760540 api_server.go:166] Checking apiserver status ...
	I0729 20:16:39.397694  760540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:16:39.411415  760540 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup
	W0729 20:16:39.420832  760540 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:16:39.420905  760540 ssh_runner.go:195] Run: ls
	I0729 20:16:39.425612  760540 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:16:39.429757  760540 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:16:39.429780  760540 status.go:422] ha-344518-m03 apiserver status = Running (err=<nil>)
	I0729 20:16:39.429789  760540 status.go:257] ha-344518-m03 status: &{Name:ha-344518-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:16:39.429803  760540 status.go:255] checking status of ha-344518-m04 ...
	I0729 20:16:39.430081  760540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:39.430115  760540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:39.445584  760540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36077
	I0729 20:16:39.446004  760540 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:39.446475  760540 main.go:141] libmachine: Using API Version  1
	I0729 20:16:39.446497  760540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:39.446826  760540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:39.447066  760540 main.go:141] libmachine: (ha-344518-m04) Calling .GetState
	I0729 20:16:39.448771  760540 status.go:330] ha-344518-m04 host status = "Running" (err=<nil>)
	I0729 20:16:39.448790  760540 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:16:39.449068  760540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:39.449101  760540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:39.464065  760540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33253
	I0729 20:16:39.464551  760540 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:39.465030  760540 main.go:141] libmachine: Using API Version  1
	I0729 20:16:39.465052  760540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:39.465360  760540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:39.465556  760540 main.go:141] libmachine: (ha-344518-m04) Calling .GetIP
	I0729 20:16:39.468087  760540 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:39.468491  760540 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:16:39.468522  760540 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:39.468685  760540 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:16:39.469075  760540 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:39.469112  760540 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:39.484939  760540 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35111
	I0729 20:16:39.485385  760540 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:39.485825  760540 main.go:141] libmachine: Using API Version  1
	I0729 20:16:39.485849  760540 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:39.486163  760540 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:39.486382  760540 main.go:141] libmachine: (ha-344518-m04) Calling .DriverName
	I0729 20:16:39.486586  760540 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:39.486610  760540 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHHostname
	I0729 20:16:39.489632  760540 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:39.490029  760540 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:16:39.490066  760540 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:39.490194  760540 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHPort
	I0729 20:16:39.490361  760540 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHKeyPath
	I0729 20:16:39.490574  760540 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHUsername
	I0729 20:16:39.490728  760540 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m04/id_rsa Username:docker}
	I0729 20:16:39.574842  760540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:39.588575  760540 status.go:257] ha-344518-m04 status: &{Name:ha-344518-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr: exit status 3 (4.843543935s)

                                                
                                                
-- stdout --
	ha-344518
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-344518-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:16:41.097203  760641 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:16:41.097344  760641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:16:41.097357  760641 out.go:304] Setting ErrFile to fd 2...
	I0729 20:16:41.097364  760641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:16:41.097556  760641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:16:41.097775  760641 out.go:298] Setting JSON to false
	I0729 20:16:41.097811  760641 mustload.go:65] Loading cluster: ha-344518
	I0729 20:16:41.097867  760641 notify.go:220] Checking for updates...
	I0729 20:16:41.098295  760641 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:16:41.098316  760641 status.go:255] checking status of ha-344518 ...
	I0729 20:16:41.098729  760641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:41.098797  760641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:41.114759  760641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35357
	I0729 20:16:41.115269  760641 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:41.116156  760641 main.go:141] libmachine: Using API Version  1
	I0729 20:16:41.116230  760641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:41.116676  760641 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:41.116968  760641 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:16:41.118779  760641 status.go:330] ha-344518 host status = "Running" (err=<nil>)
	I0729 20:16:41.118799  760641 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:16:41.119099  760641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:41.119140  760641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:41.134301  760641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43969
	I0729 20:16:41.134788  760641 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:41.135258  760641 main.go:141] libmachine: Using API Version  1
	I0729 20:16:41.135279  760641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:41.135595  760641 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:41.135807  760641 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:16:41.138437  760641 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:41.138852  760641 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:16:41.138891  760641 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:41.139029  760641 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:16:41.139340  760641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:41.139376  760641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:41.155370  760641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36149
	I0729 20:16:41.155784  760641 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:41.156328  760641 main.go:141] libmachine: Using API Version  1
	I0729 20:16:41.156354  760641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:41.156699  760641 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:41.156910  760641 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:16:41.157094  760641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:41.157124  760641 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:16:41.159719  760641 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:41.160092  760641 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:16:41.160115  760641 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:41.160250  760641 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:16:41.160490  760641 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:16:41.160699  760641 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:16:41.160824  760641 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:16:41.239446  760641 ssh_runner.go:195] Run: systemctl --version
	I0729 20:16:41.245750  760641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:41.260573  760641 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:16:41.260604  760641 api_server.go:166] Checking apiserver status ...
	I0729 20:16:41.260638  760641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:16:41.273703  760641 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0729 20:16:41.288946  760641 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:16:41.289039  760641 ssh_runner.go:195] Run: ls
	I0729 20:16:41.295021  760641 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:16:41.299259  760641 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:16:41.299286  760641 status.go:422] ha-344518 apiserver status = Running (err=<nil>)
	I0729 20:16:41.299297  760641 status.go:257] ha-344518 status: &{Name:ha-344518 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:16:41.299313  760641 status.go:255] checking status of ha-344518-m02 ...
	I0729 20:16:41.299623  760641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:41.299654  760641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:41.315027  760641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36211
	I0729 20:16:41.315543  760641 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:41.316010  760641 main.go:141] libmachine: Using API Version  1
	I0729 20:16:41.316054  760641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:41.316402  760641 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:41.316637  760641 main.go:141] libmachine: (ha-344518-m02) Calling .GetState
	I0729 20:16:41.318344  760641 status.go:330] ha-344518-m02 host status = "Running" (err=<nil>)
	I0729 20:16:41.318360  760641 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:16:41.318691  760641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:41.318730  760641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:41.335119  760641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
	I0729 20:16:41.335643  760641 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:41.336199  760641 main.go:141] libmachine: Using API Version  1
	I0729 20:16:41.336236  760641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:41.336593  760641 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:41.336862  760641 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:16:41.339636  760641 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:41.340089  760641 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:16:41.340129  760641 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:41.340310  760641 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:16:41.340665  760641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:41.340714  760641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:41.356471  760641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42239
	I0729 20:16:41.356937  760641 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:41.357400  760641 main.go:141] libmachine: Using API Version  1
	I0729 20:16:41.357424  760641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:41.357709  760641 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:41.357886  760641 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:16:41.358046  760641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:41.358066  760641 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:16:41.361108  760641 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:41.361544  760641 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:16:41.361565  760641 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:41.361760  760641 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:16:41.361917  760641 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:16:41.362192  760641 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:16:41.362360  760641 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	W0729 20:16:42.308265  760641 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:16:42.308361  760641 retry.go:31] will retry after 155.253362ms: dial tcp 192.168.39.104:22: connect: no route to host
	W0729 20:16:45.540316  760641 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	W0729 20:16:45.540402  760641 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0729 20:16:45.540419  760641 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:16:45.540428  760641 status.go:257] ha-344518-m02 status: &{Name:ha-344518-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 20:16:45.540460  760641 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:16:45.540471  760641 status.go:255] checking status of ha-344518-m03 ...
	I0729 20:16:45.540783  760641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:45.540827  760641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:45.558483  760641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42419
	I0729 20:16:45.558956  760641 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:45.559478  760641 main.go:141] libmachine: Using API Version  1
	I0729 20:16:45.559502  760641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:45.559849  760641 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:45.560075  760641 main.go:141] libmachine: (ha-344518-m03) Calling .GetState
	I0729 20:16:45.561767  760641 status.go:330] ha-344518-m03 host status = "Running" (err=<nil>)
	I0729 20:16:45.561783  760641 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:16:45.562078  760641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:45.562113  760641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:45.576651  760641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45773
	I0729 20:16:45.577103  760641 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:45.577673  760641 main.go:141] libmachine: Using API Version  1
	I0729 20:16:45.577701  760641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:45.578038  760641 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:45.578242  760641 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:16:45.581305  760641 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:45.581756  760641 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:16:45.581783  760641 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:45.581945  760641 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:16:45.582268  760641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:45.582340  760641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:45.597316  760641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45545
	I0729 20:16:45.597755  760641 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:45.598326  760641 main.go:141] libmachine: Using API Version  1
	I0729 20:16:45.598348  760641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:45.598688  760641 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:45.598890  760641 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:16:45.599079  760641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:45.599100  760641 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:16:45.602148  760641 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:45.602672  760641 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:16:45.602696  760641 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:45.602818  760641 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:16:45.602973  760641 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:16:45.603120  760641 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:16:45.603240  760641 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:16:45.682796  760641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:45.697988  760641 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:16:45.698020  760641 api_server.go:166] Checking apiserver status ...
	I0729 20:16:45.698054  760641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:16:45.710657  760641 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup
	W0729 20:16:45.725449  760641 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:16:45.725503  760641 ssh_runner.go:195] Run: ls
	I0729 20:16:45.729268  760641 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:16:45.733943  760641 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:16:45.733965  760641 status.go:422] ha-344518-m03 apiserver status = Running (err=<nil>)
	I0729 20:16:45.733974  760641 status.go:257] ha-344518-m03 status: &{Name:ha-344518-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:16:45.733988  760641 status.go:255] checking status of ha-344518-m04 ...
	I0729 20:16:45.734277  760641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:45.734317  760641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:45.749947  760641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35595
	I0729 20:16:45.750431  760641 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:45.750943  760641 main.go:141] libmachine: Using API Version  1
	I0729 20:16:45.750964  760641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:45.751324  760641 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:45.751532  760641 main.go:141] libmachine: (ha-344518-m04) Calling .GetState
	I0729 20:16:45.753263  760641 status.go:330] ha-344518-m04 host status = "Running" (err=<nil>)
	I0729 20:16:45.753283  760641 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:16:45.753641  760641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:45.753681  760641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:45.769556  760641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46325
	I0729 20:16:45.769977  760641 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:45.770478  760641 main.go:141] libmachine: Using API Version  1
	I0729 20:16:45.770498  760641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:45.770848  760641 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:45.771061  760641 main.go:141] libmachine: (ha-344518-m04) Calling .GetIP
	I0729 20:16:45.773702  760641 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:45.774046  760641 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:16:45.774086  760641 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:45.774166  760641 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:16:45.774534  760641 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:45.774581  760641 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:45.793013  760641 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39559
	I0729 20:16:45.793515  760641 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:45.794096  760641 main.go:141] libmachine: Using API Version  1
	I0729 20:16:45.794125  760641 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:45.794489  760641 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:45.794710  760641 main.go:141] libmachine: (ha-344518-m04) Calling .DriverName
	I0729 20:16:45.794926  760641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:45.794947  760641 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHHostname
	I0729 20:16:45.797881  760641 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:45.798258  760641 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:16:45.798290  760641 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:45.798419  760641 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHPort
	I0729 20:16:45.798601  760641 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHKeyPath
	I0729 20:16:45.798743  760641 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHUsername
	I0729 20:16:45.798837  760641 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m04/id_rsa Username:docker}
	I0729 20:16:45.879508  760641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:45.894730  760641 status.go:257] ha-344518-m04 status: &{Name:ha-344518-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
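The stderr above is the whole reason for exit status 3: libmachine still reports the ha-344518-m02 domain as "Running", but every SSH dial to 192.168.39.104:22 fails with "no route to host", so the storage/kubelet/apiserver checks for that node are skipped and it is reported as Host:Error with kubelet/apiserver Nonexistent, while the shared apiserver endpoint at https://192.168.39.254:8443/healthz keeps answering 200 from the surviving control planes. A minimal, hypothetical Go probe of those same two endpoints (not part of the test suite; the InsecureSkipVerify shortcut stands in for the cluster CA handling minikube does internally) could look like this:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"net/http"
		"time"
	)

	func main() {
		// SSH reachability of the m02 node (address taken from the log above).
		conn, err := net.DialTimeout("tcp", "192.168.39.104:22", 3*time.Second)
		if err != nil {
			fmt.Println("ssh port unreachable:", err) // matches the "no route to host" dial failures above
		} else {
			conn.Close()
		}

		// Health of the shared apiserver VIP that the remaining control planes answer on.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status) // the log shows this returning 200 ok
	}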
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr: exit status 3 (5.101023018s)

                                                
                                                
-- stdout --
	ha-344518
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-344518-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:16:46.977652  760741 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:16:46.977918  760741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:16:46.977930  760741 out.go:304] Setting ErrFile to fd 2...
	I0729 20:16:46.977935  760741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:16:46.978186  760741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:16:46.978397  760741 out.go:298] Setting JSON to false
	I0729 20:16:46.978425  760741 mustload.go:65] Loading cluster: ha-344518
	I0729 20:16:46.978460  760741 notify.go:220] Checking for updates...
	I0729 20:16:46.978840  760741 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:16:46.978858  760741 status.go:255] checking status of ha-344518 ...
	I0729 20:16:46.979272  760741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:46.979325  760741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:46.997651  760741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46821
	I0729 20:16:46.998084  760741 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:46.998703  760741 main.go:141] libmachine: Using API Version  1
	I0729 20:16:46.998726  760741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:46.999158  760741 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:46.999409  760741 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:16:47.001232  760741 status.go:330] ha-344518 host status = "Running" (err=<nil>)
	I0729 20:16:47.001252  760741 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:16:47.001591  760741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:47.001663  760741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:47.017251  760741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43099
	I0729 20:16:47.017786  760741 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:47.018825  760741 main.go:141] libmachine: Using API Version  1
	I0729 20:16:47.018871  760741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:47.019269  760741 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:47.019484  760741 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:16:47.022287  760741 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:47.022728  760741 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:16:47.022755  760741 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:47.022900  760741 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:16:47.023246  760741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:47.023293  760741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:47.039564  760741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46693
	I0729 20:16:47.039979  760741 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:47.040533  760741 main.go:141] libmachine: Using API Version  1
	I0729 20:16:47.040564  760741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:47.040964  760741 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:47.041189  760741 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:16:47.041418  760741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:47.041451  760741 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:16:47.044130  760741 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:47.044503  760741 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:16:47.044529  760741 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:47.044685  760741 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:16:47.044855  760741 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:16:47.045015  760741 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:16:47.045158  760741 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:16:47.131568  760741 ssh_runner.go:195] Run: systemctl --version
	I0729 20:16:47.137677  760741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:47.152904  760741 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:16:47.152940  760741 api_server.go:166] Checking apiserver status ...
	I0729 20:16:47.152982  760741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:16:47.167858  760741 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0729 20:16:47.176843  760741 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:16:47.176893  760741 ssh_runner.go:195] Run: ls
	I0729 20:16:47.181040  760741 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:16:47.185297  760741 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:16:47.185324  760741 status.go:422] ha-344518 apiserver status = Running (err=<nil>)
	I0729 20:16:47.185334  760741 status.go:257] ha-344518 status: &{Name:ha-344518 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:16:47.185360  760741 status.go:255] checking status of ha-344518-m02 ...
	I0729 20:16:47.185660  760741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:47.185698  760741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:47.202035  760741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39835
	I0729 20:16:47.202567  760741 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:47.203029  760741 main.go:141] libmachine: Using API Version  1
	I0729 20:16:47.203050  760741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:47.203422  760741 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:47.203631  760741 main.go:141] libmachine: (ha-344518-m02) Calling .GetState
	I0729 20:16:47.205166  760741 status.go:330] ha-344518-m02 host status = "Running" (err=<nil>)
	I0729 20:16:47.205188  760741 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:16:47.205534  760741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:47.205576  760741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:47.220844  760741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36665
	I0729 20:16:47.221336  760741 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:47.221799  760741 main.go:141] libmachine: Using API Version  1
	I0729 20:16:47.221819  760741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:47.222115  760741 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:47.222290  760741 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:16:47.225568  760741 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:47.226176  760741 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:16:47.226199  760741 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:47.226341  760741 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:16:47.226733  760741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:47.226774  760741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:47.242362  760741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45361
	I0729 20:16:47.242813  760741 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:47.243325  760741 main.go:141] libmachine: Using API Version  1
	I0729 20:16:47.243347  760741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:47.243655  760741 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:47.243853  760741 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:16:47.244092  760741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:47.244115  760741 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:16:47.246479  760741 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:47.246905  760741 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:16:47.246948  760741 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:47.247020  760741 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:16:47.247297  760741 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:16:47.247496  760741 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:16:47.247638  760741 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	W0729 20:16:48.612347  760741 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:16:48.612401  760741 retry.go:31] will retry after 278.305488ms: dial tcp 192.168.39.104:22: connect: no route to host
	W0729 20:16:51.684329  760741 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	W0729 20:16:51.684448  760741 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0729 20:16:51.684469  760741 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:16:51.684477  760741 status.go:257] ha-344518-m02 status: &{Name:ha-344518-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 20:16:51.684506  760741 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:16:51.684513  760741 status.go:255] checking status of ha-344518-m03 ...
	I0729 20:16:51.684837  760741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:51.684881  760741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:51.701262  760741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44753
	I0729 20:16:51.701732  760741 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:51.702261  760741 main.go:141] libmachine: Using API Version  1
	I0729 20:16:51.702301  760741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:51.702640  760741 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:51.702867  760741 main.go:141] libmachine: (ha-344518-m03) Calling .GetState
	I0729 20:16:51.704454  760741 status.go:330] ha-344518-m03 host status = "Running" (err=<nil>)
	I0729 20:16:51.704470  760741 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:16:51.704778  760741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:51.704815  760741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:51.719697  760741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33185
	I0729 20:16:51.720159  760741 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:51.720621  760741 main.go:141] libmachine: Using API Version  1
	I0729 20:16:51.720644  760741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:51.720968  760741 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:51.721148  760741 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:16:51.723720  760741 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:51.724206  760741 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:16:51.724242  760741 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:51.724401  760741 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:16:51.724743  760741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:51.724784  760741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:51.741362  760741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41517
	I0729 20:16:51.741796  760741 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:51.742268  760741 main.go:141] libmachine: Using API Version  1
	I0729 20:16:51.742288  760741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:51.742615  760741 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:51.742803  760741 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:16:51.743055  760741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:51.743090  760741 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:16:51.745680  760741 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:51.746070  760741 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:16:51.746093  760741 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:51.746227  760741 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:16:51.746430  760741 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:16:51.746579  760741 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:16:51.746716  760741 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:16:51.827668  760741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:51.844859  760741 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:16:51.844890  760741 api_server.go:166] Checking apiserver status ...
	I0729 20:16:51.844929  760741 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:16:51.858436  760741 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup
	W0729 20:16:51.869239  760741 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:16:51.869308  760741 ssh_runner.go:195] Run: ls
	I0729 20:16:51.873441  760741 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:16:51.879542  760741 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:16:51.879567  760741 status.go:422] ha-344518-m03 apiserver status = Running (err=<nil>)
	I0729 20:16:51.879579  760741 status.go:257] ha-344518-m03 status: &{Name:ha-344518-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:16:51.879602  760741 status.go:255] checking status of ha-344518-m04 ...
	I0729 20:16:51.879976  760741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:51.880050  760741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:51.895114  760741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34481
	I0729 20:16:51.895636  760741 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:51.896187  760741 main.go:141] libmachine: Using API Version  1
	I0729 20:16:51.896210  760741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:51.896565  760741 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:51.896800  760741 main.go:141] libmachine: (ha-344518-m04) Calling .GetState
	I0729 20:16:51.898358  760741 status.go:330] ha-344518-m04 host status = "Running" (err=<nil>)
	I0729 20:16:51.898373  760741 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:16:51.898709  760741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:51.898754  760741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:51.913608  760741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39109
	I0729 20:16:51.913971  760741 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:51.914458  760741 main.go:141] libmachine: Using API Version  1
	I0729 20:16:51.914478  760741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:51.914805  760741 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:51.915008  760741 main.go:141] libmachine: (ha-344518-m04) Calling .GetIP
	I0729 20:16:51.917837  760741 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:51.918176  760741 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:16:51.918199  760741 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:51.918366  760741 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:16:51.918662  760741 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:51.918704  760741 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:51.935403  760741 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45279
	I0729 20:16:51.935813  760741 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:51.936439  760741 main.go:141] libmachine: Using API Version  1
	I0729 20:16:51.936466  760741 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:51.936861  760741 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:51.937090  760741 main.go:141] libmachine: (ha-344518-m04) Calling .DriverName
	I0729 20:16:51.937289  760741 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:51.937316  760741 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHHostname
	I0729 20:16:51.940181  760741 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:51.940657  760741 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:16:51.940687  760741 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:51.940813  760741 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHPort
	I0729 20:16:51.941009  760741 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHKeyPath
	I0729 20:16:51.941154  760741 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHUsername
	I0729 20:16:51.941301  760741 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m04/id_rsa Username:docker}
	I0729 20:16:52.019102  760741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:52.033433  760741 status.go:257] ha-344518-m04 status: &{Name:ha-344518-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
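As in the previous run, the only failing step is the SSH dial to ha-344518-m02; the retry.go/sshutil.go lines show the dial being retried after a short delay before the error is surfaced and the node is marked Host:Error. A rough, hypothetical sketch of that dial-with-retry shape (not minikube's actual retry package) is:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry keeps attempting a TCP connection until it succeeds or the
	// overall deadline expires, sleeping briefly between attempts, roughly the
	// behaviour visible in the retry.go / sshutil.go lines above.
	func dialWithRetry(addr string, deadline time.Duration) (net.Conn, error) {
		var lastErr error
		for start := time.Now(); time.Since(start) < deadline; {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			time.Sleep(300 * time.Millisecond) // the log shows waits in the ~270-280ms range
		}
		return nil, fmt.Errorf("retries exhausted: %w", lastErr)
	}

	func main() {
		if _, err := dialWithRetry("192.168.39.104:22", 10*time.Second); err != nil {
			fmt.Println(err) // expected here: connect: no route to host
		}
	}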
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr: exit status 3 (4.961955516s)

                                                
                                                
-- stdout --
	ha-344518
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-344518-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:16:53.516189  760842 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:16:53.516312  760842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:16:53.516322  760842 out.go:304] Setting ErrFile to fd 2...
	I0729 20:16:53.516329  760842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:16:53.516506  760842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:16:53.516710  760842 out.go:298] Setting JSON to false
	I0729 20:16:53.516745  760842 mustload.go:65] Loading cluster: ha-344518
	I0729 20:16:53.516883  760842 notify.go:220] Checking for updates...
	I0729 20:16:53.517197  760842 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:16:53.517216  760842 status.go:255] checking status of ha-344518 ...
	I0729 20:16:53.517624  760842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:53.517698  760842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:53.535682  760842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38291
	I0729 20:16:53.536134  760842 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:53.536697  760842 main.go:141] libmachine: Using API Version  1
	I0729 20:16:53.536718  760842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:53.537077  760842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:53.537308  760842 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:16:53.539136  760842 status.go:330] ha-344518 host status = "Running" (err=<nil>)
	I0729 20:16:53.539156  760842 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:16:53.539485  760842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:53.539523  760842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:53.554479  760842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40183
	I0729 20:16:53.554916  760842 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:53.555356  760842 main.go:141] libmachine: Using API Version  1
	I0729 20:16:53.555384  760842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:53.555709  760842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:53.555906  760842 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:16:53.558743  760842 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:53.559165  760842 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:16:53.559191  760842 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:53.559377  760842 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:16:53.559686  760842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:53.559729  760842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:53.574888  760842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43005
	I0729 20:16:53.575403  760842 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:53.575831  760842 main.go:141] libmachine: Using API Version  1
	I0729 20:16:53.575848  760842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:53.576183  760842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:53.576387  760842 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:16:53.576582  760842 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:53.576619  760842 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:16:53.579595  760842 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:53.580061  760842 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:16:53.580087  760842 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:16:53.580217  760842 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:16:53.580420  760842 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:16:53.580576  760842 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:16:53.580782  760842 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:16:53.658910  760842 ssh_runner.go:195] Run: systemctl --version
	I0729 20:16:53.665091  760842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:53.683142  760842 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:16:53.683172  760842 api_server.go:166] Checking apiserver status ...
	I0729 20:16:53.683213  760842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:16:53.698570  760842 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0729 20:16:53.707998  760842 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:16:53.708078  760842 ssh_runner.go:195] Run: ls
	I0729 20:16:53.712062  760842 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:16:53.716741  760842 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:16:53.716766  760842 status.go:422] ha-344518 apiserver status = Running (err=<nil>)
	I0729 20:16:53.716778  760842 status.go:257] ha-344518 status: &{Name:ha-344518 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:16:53.716794  760842 status.go:255] checking status of ha-344518-m02 ...
	I0729 20:16:53.717108  760842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:53.717155  760842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:53.733321  760842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
	I0729 20:16:53.733798  760842 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:53.734277  760842 main.go:141] libmachine: Using API Version  1
	I0729 20:16:53.734299  760842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:53.734626  760842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:53.734841  760842 main.go:141] libmachine: (ha-344518-m02) Calling .GetState
	I0729 20:16:53.736523  760842 status.go:330] ha-344518-m02 host status = "Running" (err=<nil>)
	I0729 20:16:53.736540  760842 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:16:53.736846  760842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:53.736886  760842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:53.752526  760842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42277
	I0729 20:16:53.752978  760842 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:53.753428  760842 main.go:141] libmachine: Using API Version  1
	I0729 20:16:53.753450  760842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:53.753831  760842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:53.754014  760842 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:16:53.756741  760842 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:53.757196  760842 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:16:53.757221  760842 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:53.757348  760842 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:16:53.757644  760842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:53.757676  760842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:53.772219  760842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43577
	I0729 20:16:53.772686  760842 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:53.773142  760842 main.go:141] libmachine: Using API Version  1
	I0729 20:16:53.773161  760842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:53.773515  760842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:53.773688  760842 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:16:53.773850  760842 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:53.773874  760842 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:16:53.776758  760842 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:53.777192  760842 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:16:53.777228  760842 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:16:53.777413  760842 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:16:53.777596  760842 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:16:53.777759  760842 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:16:53.777898  760842 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	W0729 20:16:54.756366  760842 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:16:54.756448  760842 retry.go:31] will retry after 264.944297ms: dial tcp 192.168.39.104:22: connect: no route to host
	W0729 20:16:58.084293  760842 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	W0729 20:16:58.084418  760842 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0729 20:16:58.084440  760842 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:16:58.084448  760842 status.go:257] ha-344518-m02 status: &{Name:ha-344518-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 20:16:58.084477  760842 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:16:58.084485  760842 status.go:255] checking status of ha-344518-m03 ...
	I0729 20:16:58.084883  760842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:58.084938  760842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:58.102003  760842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46583
	I0729 20:16:58.102483  760842 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:58.103050  760842 main.go:141] libmachine: Using API Version  1
	I0729 20:16:58.103074  760842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:58.103450  760842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:58.103710  760842 main.go:141] libmachine: (ha-344518-m03) Calling .GetState
	I0729 20:16:58.105400  760842 status.go:330] ha-344518-m03 host status = "Running" (err=<nil>)
	I0729 20:16:58.105422  760842 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:16:58.105715  760842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:58.105748  760842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:58.121264  760842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33247
	I0729 20:16:58.121705  760842 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:58.122208  760842 main.go:141] libmachine: Using API Version  1
	I0729 20:16:58.122229  760842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:58.122571  760842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:58.122783  760842 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:16:58.125722  760842 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:58.126144  760842 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:16:58.126171  760842 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:58.126301  760842 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:16:58.126640  760842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:58.126687  760842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:58.141446  760842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40861
	I0729 20:16:58.141859  760842 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:58.142362  760842 main.go:141] libmachine: Using API Version  1
	I0729 20:16:58.142391  760842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:58.142749  760842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:58.143040  760842 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:16:58.143259  760842 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:58.143277  760842 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:16:58.146240  760842 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:58.146687  760842 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:16:58.146727  760842 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:16:58.146862  760842 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:16:58.147036  760842 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:16:58.147201  760842 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:16:58.147368  760842 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:16:58.228224  760842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:58.243103  760842 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:16:58.243138  760842 api_server.go:166] Checking apiserver status ...
	I0729 20:16:58.243185  760842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:16:58.258719  760842 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup
	W0729 20:16:58.269298  760842 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:16:58.269345  760842 ssh_runner.go:195] Run: ls
	I0729 20:16:58.273711  760842 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:16:58.279780  760842 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:16:58.279803  760842 status.go:422] ha-344518-m03 apiserver status = Running (err=<nil>)
	I0729 20:16:58.279814  760842 status.go:257] ha-344518-m03 status: &{Name:ha-344518-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:16:58.279835  760842 status.go:255] checking status of ha-344518-m04 ...
	I0729 20:16:58.280175  760842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:58.280221  760842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:58.295417  760842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40617
	I0729 20:16:58.295891  760842 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:58.296358  760842 main.go:141] libmachine: Using API Version  1
	I0729 20:16:58.296385  760842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:58.296728  760842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:58.296938  760842 main.go:141] libmachine: (ha-344518-m04) Calling .GetState
	I0729 20:16:58.298505  760842 status.go:330] ha-344518-m04 host status = "Running" (err=<nil>)
	I0729 20:16:58.298521  760842 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:16:58.298811  760842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:58.298843  760842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:58.314540  760842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38135
	I0729 20:16:58.314975  760842 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:58.315426  760842 main.go:141] libmachine: Using API Version  1
	I0729 20:16:58.315446  760842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:58.315811  760842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:58.316014  760842 main.go:141] libmachine: (ha-344518-m04) Calling .GetIP
	I0729 20:16:58.318671  760842 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:58.319078  760842 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:16:58.319111  760842 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:58.319238  760842 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:16:58.319544  760842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:16:58.319582  760842 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:16:58.334054  760842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35143
	I0729 20:16:58.334451  760842 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:16:58.334920  760842 main.go:141] libmachine: Using API Version  1
	I0729 20:16:58.334938  760842 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:16:58.335227  760842 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:16:58.335410  760842 main.go:141] libmachine: (ha-344518-m04) Calling .DriverName
	I0729 20:16:58.335616  760842 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:16:58.335640  760842 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHHostname
	I0729 20:16:58.338266  760842 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:58.338747  760842 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:16:58.338786  760842 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:16:58.338915  760842 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHPort
	I0729 20:16:58.339098  760842 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHKeyPath
	I0729 20:16:58.339266  760842 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHUsername
	I0729 20:16:58.339412  760842 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m04/id_rsa Username:docker}
	I0729 20:16:58.419413  760842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:16:58.434545  760842 status.go:257] ha-344518-m04 status: &{Name:ha-344518-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
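Each node that is still reachable goes through the same apiserver probe visible in the stderr above: pgrep finds the kube-apiserver PID, the freezer-cgroup lookup exits non-zero (the "unable to find freezer cgroup" warning, most likely because a cgroup v2 guest has no named freezer hierarchy in /proc/PID/cgroup), and the check falls back to GET /healthz, which returns 200. A hypothetical, condensed version of that sequence, shelling out the same way the ssh_runner lines do, might be:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Find the apiserver PID the same way the log's pgrep step does.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("apiserver process not found:", err)
			return
		}
		pid := strings.TrimSpace(string(out))

		// The freezer-cgroup lookup: on a cgroup v2 guest this grep matches nothing
		// and exits non-zero, which is the warning seen in the log; the status code
		// then falls back to the /healthz HTTP check.
		if err := exec.Command("sh", "-c",
			"egrep '^[0-9]+:freezer:' /proc/"+pid+"/cgroup").Run(); err != nil {
			fmt.Println("no freezer cgroup entry for pid", pid, "- falling back to /healthz")
		}
	}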
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr: exit status 3 (3.74574907s)

                                                
                                                
-- stdout --
	ha-344518
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-344518-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:17:01.178844  760964 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:17:01.178956  760964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:17:01.178965  760964 out.go:304] Setting ErrFile to fd 2...
	I0729 20:17:01.178970  760964 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:17:01.179159  760964 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:17:01.179341  760964 out.go:298] Setting JSON to false
	I0729 20:17:01.179370  760964 mustload.go:65] Loading cluster: ha-344518
	I0729 20:17:01.179474  760964 notify.go:220] Checking for updates...
	I0729 20:17:01.179735  760964 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:17:01.179755  760964 status.go:255] checking status of ha-344518 ...
	I0729 20:17:01.180166  760964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:01.180230  760964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:01.198870  760964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37455
	I0729 20:17:01.199332  760964 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:01.199933  760964 main.go:141] libmachine: Using API Version  1
	I0729 20:17:01.199961  760964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:01.200368  760964 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:01.200621  760964 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:17:01.202395  760964 status.go:330] ha-344518 host status = "Running" (err=<nil>)
	I0729 20:17:01.202415  760964 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:17:01.202787  760964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:01.202827  760964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:01.218366  760964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37699
	I0729 20:17:01.218827  760964 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:01.219387  760964 main.go:141] libmachine: Using API Version  1
	I0729 20:17:01.219408  760964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:01.219740  760964 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:01.219913  760964 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:17:01.223145  760964 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:01.223574  760964 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:17:01.223625  760964 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:01.223713  760964 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:17:01.224002  760964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:01.224053  760964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:01.239301  760964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35149
	I0729 20:17:01.239698  760964 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:01.240225  760964 main.go:141] libmachine: Using API Version  1
	I0729 20:17:01.240246  760964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:01.240588  760964 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:01.240795  760964 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:17:01.241011  760964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:17:01.241045  760964 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:17:01.243970  760964 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:01.244497  760964 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:17:01.244549  760964 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:01.244791  760964 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:17:01.244968  760964 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:17:01.245107  760964 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:17:01.245214  760964 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:17:01.323549  760964 ssh_runner.go:195] Run: systemctl --version
	I0729 20:17:01.330170  760964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:17:01.343908  760964 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:17:01.343944  760964 api_server.go:166] Checking apiserver status ...
	I0729 20:17:01.343994  760964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:17:01.362566  760964 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0729 20:17:01.371869  760964 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:17:01.371927  760964 ssh_runner.go:195] Run: ls
	I0729 20:17:01.375722  760964 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:17:01.380165  760964 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:17:01.380191  760964 status.go:422] ha-344518 apiserver status = Running (err=<nil>)
	I0729 20:17:01.380205  760964 status.go:257] ha-344518 status: &{Name:ha-344518 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:17:01.380229  760964 status.go:255] checking status of ha-344518-m02 ...
	I0729 20:17:01.380530  760964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:01.380570  760964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:01.396276  760964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36793
	I0729 20:17:01.396743  760964 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:01.397259  760964 main.go:141] libmachine: Using API Version  1
	I0729 20:17:01.397287  760964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:01.397632  760964 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:01.397840  760964 main.go:141] libmachine: (ha-344518-m02) Calling .GetState
	I0729 20:17:01.399466  760964 status.go:330] ha-344518-m02 host status = "Running" (err=<nil>)
	I0729 20:17:01.399483  760964 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:17:01.399830  760964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:01.399872  760964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:01.415600  760964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41989
	I0729 20:17:01.416064  760964 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:01.416638  760964 main.go:141] libmachine: Using API Version  1
	I0729 20:17:01.416670  760964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:01.416992  760964 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:01.417182  760964 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:17:01.420211  760964 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:17:01.420661  760964 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:17:01.420692  760964 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:17:01.420836  760964 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:17:01.421192  760964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:01.421236  760964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:01.435930  760964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I0729 20:17:01.436450  760964 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:01.436895  760964 main.go:141] libmachine: Using API Version  1
	I0729 20:17:01.436915  760964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:01.437238  760964 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:01.437386  760964 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:17:01.437596  760964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:17:01.437618  760964 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:17:01.440000  760964 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:17:01.440460  760964 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:17:01.440490  760964 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:17:01.440638  760964 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:17:01.440817  760964 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:17:01.440983  760964 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:17:01.441114  760964 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	W0729 20:17:04.516380  760964 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	W0729 20:17:04.516509  760964 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0729 20:17:04.516558  760964 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:17:04.516569  760964 status.go:257] ha-344518-m02 status: &{Name:ha-344518-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 20:17:04.516590  760964 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:17:04.516604  760964 status.go:255] checking status of ha-344518-m03 ...
	I0729 20:17:04.516976  760964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:04.517027  760964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:04.532755  760964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43923
	I0729 20:17:04.533344  760964 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:04.533934  760964 main.go:141] libmachine: Using API Version  1
	I0729 20:17:04.533960  760964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:04.534345  760964 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:04.534561  760964 main.go:141] libmachine: (ha-344518-m03) Calling .GetState
	I0729 20:17:04.536281  760964 status.go:330] ha-344518-m03 host status = "Running" (err=<nil>)
	I0729 20:17:04.536306  760964 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:17:04.536606  760964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:04.536643  760964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:04.554377  760964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35287
	I0729 20:17:04.554974  760964 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:04.555490  760964 main.go:141] libmachine: Using API Version  1
	I0729 20:17:04.555513  760964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:04.555912  760964 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:04.556187  760964 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:17:04.559378  760964 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:04.559831  760964 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:17:04.559847  760964 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:04.560013  760964 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:17:04.560342  760964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:04.560386  760964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:04.575863  760964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37841
	I0729 20:17:04.576365  760964 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:04.576902  760964 main.go:141] libmachine: Using API Version  1
	I0729 20:17:04.576927  760964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:04.577252  760964 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:04.577450  760964 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:17:04.577624  760964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:17:04.577641  760964 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:17:04.580313  760964 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:04.580804  760964 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:17:04.580830  760964 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:04.581010  760964 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:17:04.581178  760964 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:17:04.581332  760964 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:17:04.581444  760964 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:17:04.667239  760964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:17:04.682103  760964 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:17:04.682132  760964 api_server.go:166] Checking apiserver status ...
	I0729 20:17:04.682177  760964 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:17:04.696051  760964 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup
	W0729 20:17:04.705122  760964 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:17:04.705181  760964 ssh_runner.go:195] Run: ls
	I0729 20:17:04.709078  760964 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:17:04.715046  760964 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:17:04.715076  760964 status.go:422] ha-344518-m03 apiserver status = Running (err=<nil>)
	I0729 20:17:04.715087  760964 status.go:257] ha-344518-m03 status: &{Name:ha-344518-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:17:04.715103  760964 status.go:255] checking status of ha-344518-m04 ...
	I0729 20:17:04.715513  760964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:04.715558  760964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:04.731613  760964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37771
	I0729 20:17:04.731998  760964 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:04.732505  760964 main.go:141] libmachine: Using API Version  1
	I0729 20:17:04.732526  760964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:04.732865  760964 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:04.733115  760964 main.go:141] libmachine: (ha-344518-m04) Calling .GetState
	I0729 20:17:04.734726  760964 status.go:330] ha-344518-m04 host status = "Running" (err=<nil>)
	I0729 20:17:04.734741  760964 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:17:04.735040  760964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:04.735090  760964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:04.750733  760964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45537
	I0729 20:17:04.751150  760964 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:04.751752  760964 main.go:141] libmachine: Using API Version  1
	I0729 20:17:04.751773  760964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:04.752107  760964 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:04.752371  760964 main.go:141] libmachine: (ha-344518-m04) Calling .GetIP
	I0729 20:17:04.755184  760964 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:04.755715  760964 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:17:04.755752  760964 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:04.755904  760964 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:17:04.756270  760964 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:04.756311  760964 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:04.771069  760964 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43547
	I0729 20:17:04.771515  760964 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:04.771979  760964 main.go:141] libmachine: Using API Version  1
	I0729 20:17:04.772000  760964 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:04.772323  760964 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:04.772534  760964 main.go:141] libmachine: (ha-344518-m04) Calling .DriverName
	I0729 20:17:04.772730  760964 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:17:04.772752  760964 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHHostname
	I0729 20:17:04.775390  760964 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:04.775818  760964 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:17:04.775841  760964 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:04.775998  760964 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHPort
	I0729 20:17:04.776191  760964 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHKeyPath
	I0729 20:17:04.776375  760964 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHUsername
	I0729 20:17:04.776501  760964 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m04/id_rsa Username:docker}
	I0729 20:17:04.854764  760964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:17:04.869102  760964 status.go:257] ha-344518-m04 status: &{Name:ha-344518-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
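[Editor's note] The "connect: no route to host" errors against 192.168.39.104:22 are why ha-344518-m02 is reported as Host:Error with kubelet and apiserver Nonexistent: the SSH dial to the m02 VM never completes, so none of the per-node checks can run. A minimal reachability sketch, assuming only the address taken from the log, could look like:

	// dial_probe.go - illustrative sketch, not minikube source.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The m02 address from the log; reachability is not assumed.
		addr := "192.168.39.104:22"
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// An unreachable or powered-off VM typically surfaces here as
			// "connect: no route to host" or a timeout, matching the log.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("port 22 reachable on", addr)
	}
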
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr: exit status 3 (3.710533985s)

                                                
                                                
-- stdout --
	ha-344518
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-344518-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:17:10.705663  761081 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:17:10.705812  761081 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:17:10.705821  761081 out.go:304] Setting ErrFile to fd 2...
	I0729 20:17:10.705834  761081 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:17:10.706036  761081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:17:10.706197  761081 out.go:298] Setting JSON to false
	I0729 20:17:10.706223  761081 mustload.go:65] Loading cluster: ha-344518
	I0729 20:17:10.706279  761081 notify.go:220] Checking for updates...
	I0729 20:17:10.706613  761081 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:17:10.706628  761081 status.go:255] checking status of ha-344518 ...
	I0729 20:17:10.707014  761081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:10.707071  761081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:10.725091  761081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38501
	I0729 20:17:10.725573  761081 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:10.726235  761081 main.go:141] libmachine: Using API Version  1
	I0729 20:17:10.726284  761081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:10.726672  761081 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:10.726941  761081 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:17:10.728656  761081 status.go:330] ha-344518 host status = "Running" (err=<nil>)
	I0729 20:17:10.728674  761081 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:17:10.729004  761081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:10.729057  761081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:10.745458  761081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
	I0729 20:17:10.745906  761081 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:10.746336  761081 main.go:141] libmachine: Using API Version  1
	I0729 20:17:10.746358  761081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:10.746837  761081 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:10.747065  761081 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:17:10.750141  761081 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:10.750747  761081 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:17:10.750778  761081 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:10.750957  761081 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:17:10.751248  761081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:10.751296  761081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:10.766682  761081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34685
	I0729 20:17:10.767129  761081 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:10.767590  761081 main.go:141] libmachine: Using API Version  1
	I0729 20:17:10.767614  761081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:10.767995  761081 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:10.768267  761081 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:17:10.768503  761081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:17:10.768543  761081 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:17:10.771548  761081 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:10.772082  761081 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:17:10.772101  761081 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:10.772298  761081 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:17:10.772469  761081 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:17:10.772604  761081 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:17:10.772722  761081 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:17:10.851193  761081 ssh_runner.go:195] Run: systemctl --version
	I0729 20:17:10.857983  761081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:17:10.872980  761081 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:17:10.873019  761081 api_server.go:166] Checking apiserver status ...
	I0729 20:17:10.873070  761081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:17:10.886660  761081 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0729 20:17:10.896859  761081 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:17:10.896936  761081 ssh_runner.go:195] Run: ls
	I0729 20:17:10.901863  761081 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:17:10.906220  761081 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:17:10.906244  761081 status.go:422] ha-344518 apiserver status = Running (err=<nil>)
	I0729 20:17:10.906257  761081 status.go:257] ha-344518 status: &{Name:ha-344518 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:17:10.906285  761081 status.go:255] checking status of ha-344518-m02 ...
	I0729 20:17:10.906594  761081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:10.906637  761081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:10.922878  761081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38835
	I0729 20:17:10.923334  761081 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:10.923974  761081 main.go:141] libmachine: Using API Version  1
	I0729 20:17:10.924010  761081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:10.924360  761081 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:10.924574  761081 main.go:141] libmachine: (ha-344518-m02) Calling .GetState
	I0729 20:17:10.925964  761081 status.go:330] ha-344518-m02 host status = "Running" (err=<nil>)
	I0729 20:17:10.925979  761081 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:17:10.926263  761081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:10.926312  761081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:10.942539  761081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38955
	I0729 20:17:10.942996  761081 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:10.943480  761081 main.go:141] libmachine: Using API Version  1
	I0729 20:17:10.943502  761081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:10.943818  761081 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:10.944073  761081 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:17:10.947059  761081 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:17:10.947489  761081 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:17:10.947515  761081 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:17:10.947648  761081 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:17:10.947936  761081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:10.947971  761081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:10.963090  761081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
	I0729 20:17:10.963570  761081 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:10.964070  761081 main.go:141] libmachine: Using API Version  1
	I0729 20:17:10.964094  761081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:10.964410  761081 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:10.964609  761081 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:17:10.964864  761081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:17:10.964895  761081 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:17:10.967619  761081 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:17:10.968008  761081 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:17:10.968058  761081 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:17:10.968188  761081 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:17:10.968382  761081 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:17:10.968553  761081 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:17:10.968697  761081 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	W0729 20:17:14.024309  761081 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.104:22: connect: no route to host
	W0729 20:17:14.024409  761081 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	E0729 20:17:14.024429  761081 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:17:14.024438  761081 status.go:257] ha-344518-m02 status: &{Name:ha-344518-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0729 20:17:14.024459  761081 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.104:22: connect: no route to host
	I0729 20:17:14.024468  761081 status.go:255] checking status of ha-344518-m03 ...
	I0729 20:17:14.024841  761081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:14.024889  761081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:14.040451  761081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34153
	I0729 20:17:14.040955  761081 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:14.041542  761081 main.go:141] libmachine: Using API Version  1
	I0729 20:17:14.041582  761081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:14.041979  761081 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:14.042248  761081 main.go:141] libmachine: (ha-344518-m03) Calling .GetState
	I0729 20:17:14.044196  761081 status.go:330] ha-344518-m03 host status = "Running" (err=<nil>)
	I0729 20:17:14.044216  761081 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:17:14.044574  761081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:14.044609  761081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:14.060833  761081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33305
	I0729 20:17:14.061319  761081 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:14.061818  761081 main.go:141] libmachine: Using API Version  1
	I0729 20:17:14.061841  761081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:14.062191  761081 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:14.062413  761081 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:17:14.065839  761081 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:14.066296  761081 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:17:14.066336  761081 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:14.066440  761081 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:17:14.066752  761081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:14.066786  761081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:14.082417  761081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33845
	I0729 20:17:14.082899  761081 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:14.083382  761081 main.go:141] libmachine: Using API Version  1
	I0729 20:17:14.083407  761081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:14.083766  761081 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:14.083939  761081 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:17:14.084145  761081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:17:14.084166  761081 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:17:14.086937  761081 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:14.087444  761081 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:17:14.087467  761081 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:14.087653  761081 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:17:14.087842  761081 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:17:14.088045  761081 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:17:14.088224  761081 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:17:14.170925  761081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:17:14.184331  761081 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:17:14.184369  761081 api_server.go:166] Checking apiserver status ...
	I0729 20:17:14.184413  761081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:17:14.198353  761081 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup
	W0729 20:17:14.206673  761081 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:17:14.206741  761081 ssh_runner.go:195] Run: ls
	I0729 20:17:14.210369  761081 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:17:14.215839  761081 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:17:14.215862  761081 status.go:422] ha-344518-m03 apiserver status = Running (err=<nil>)
	I0729 20:17:14.215871  761081 status.go:257] ha-344518-m03 status: &{Name:ha-344518-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:17:14.215893  761081 status.go:255] checking status of ha-344518-m04 ...
	I0729 20:17:14.216230  761081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:14.216268  761081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:14.231969  761081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34865
	I0729 20:17:14.232488  761081 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:14.232930  761081 main.go:141] libmachine: Using API Version  1
	I0729 20:17:14.232952  761081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:14.233268  761081 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:14.233448  761081 main.go:141] libmachine: (ha-344518-m04) Calling .GetState
	I0729 20:17:14.235285  761081 status.go:330] ha-344518-m04 host status = "Running" (err=<nil>)
	I0729 20:17:14.235306  761081 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:17:14.235618  761081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:14.235658  761081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:14.250331  761081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42191
	I0729 20:17:14.250814  761081 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:14.251337  761081 main.go:141] libmachine: Using API Version  1
	I0729 20:17:14.251362  761081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:14.251674  761081 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:14.251869  761081 main.go:141] libmachine: (ha-344518-m04) Calling .GetIP
	I0729 20:17:14.254610  761081 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:14.255033  761081 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:17:14.255060  761081 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:14.255235  761081 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:17:14.255537  761081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:14.255580  761081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:14.270299  761081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43797
	I0729 20:17:14.270722  761081 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:14.271166  761081 main.go:141] libmachine: Using API Version  1
	I0729 20:17:14.271187  761081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:14.271518  761081 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:14.271758  761081 main.go:141] libmachine: (ha-344518-m04) Calling .DriverName
	I0729 20:17:14.271985  761081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:17:14.272007  761081 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHHostname
	I0729 20:17:14.274776  761081 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:14.275209  761081 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:17:14.275241  761081 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:14.275459  761081 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHPort
	I0729 20:17:14.275637  761081 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHKeyPath
	I0729 20:17:14.275788  761081 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHUsername
	I0729 20:17:14.275964  761081 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m04/id_rsa Username:docker}
	I0729 20:17:14.355051  761081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:17:14.369739  761081 status.go:257] ha-344518-m04 status: &{Name:ha-344518-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
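[Editor's note] For nodes that are reachable, the status path confirms the API server by hitting the shared VIP's /healthz endpoint and expecting HTTP 200, as the api_server.go lines above show. A self-contained sketch of that probe (the InsecureSkipVerify transport is an assumption made only to avoid wiring in the cluster CA) might be:

	// healthz_probe.go - illustrative sketch, not minikube source.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption: skip certificate verification to keep the sketch standalone.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// The VIP endpoint taken from the log output above.
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
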
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr: exit status 7 (621.4828ms)

                                                
                                                
-- stdout --
	ha-344518
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-344518-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:17:22.176589  761216 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:17:22.176739  761216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:17:22.176750  761216 out.go:304] Setting ErrFile to fd 2...
	I0729 20:17:22.176757  761216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:17:22.176973  761216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:17:22.177166  761216 out.go:298] Setting JSON to false
	I0729 20:17:22.177198  761216 mustload.go:65] Loading cluster: ha-344518
	I0729 20:17:22.177258  761216 notify.go:220] Checking for updates...
	I0729 20:17:22.177647  761216 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:17:22.177669  761216 status.go:255] checking status of ha-344518 ...
	I0729 20:17:22.178058  761216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:22.178131  761216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:22.195921  761216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33521
	I0729 20:17:22.196504  761216 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:22.197167  761216 main.go:141] libmachine: Using API Version  1
	I0729 20:17:22.197196  761216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:22.197539  761216 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:22.197732  761216 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:17:22.199515  761216 status.go:330] ha-344518 host status = "Running" (err=<nil>)
	I0729 20:17:22.199544  761216 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:17:22.199833  761216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:22.199878  761216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:22.216327  761216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
	I0729 20:17:22.216788  761216 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:22.217233  761216 main.go:141] libmachine: Using API Version  1
	I0729 20:17:22.217253  761216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:22.217594  761216 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:22.217793  761216 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:17:22.220982  761216 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:22.221473  761216 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:17:22.221508  761216 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:22.221724  761216 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:17:22.222044  761216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:22.222089  761216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:22.238788  761216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37973
	I0729 20:17:22.239183  761216 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:22.239621  761216 main.go:141] libmachine: Using API Version  1
	I0729 20:17:22.239646  761216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:22.239998  761216 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:22.240211  761216 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:17:22.240457  761216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:17:22.240489  761216 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:17:22.243091  761216 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:22.243437  761216 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:17:22.243467  761216 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:22.243638  761216 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:17:22.243914  761216 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:17:22.244106  761216 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:17:22.244347  761216 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:17:22.323671  761216 ssh_runner.go:195] Run: systemctl --version
	I0729 20:17:22.329958  761216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:17:22.345334  761216 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:17:22.345367  761216 api_server.go:166] Checking apiserver status ...
	I0729 20:17:22.345402  761216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:17:22.361118  761216 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0729 20:17:22.372050  761216 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:17:22.372117  761216 ssh_runner.go:195] Run: ls
	I0729 20:17:22.376237  761216 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:17:22.380585  761216 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:17:22.380619  761216 status.go:422] ha-344518 apiserver status = Running (err=<nil>)
	I0729 20:17:22.380629  761216 status.go:257] ha-344518 status: &{Name:ha-344518 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:17:22.380645  761216 status.go:255] checking status of ha-344518-m02 ...
	I0729 20:17:22.380924  761216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:22.380965  761216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:22.396684  761216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37467
	I0729 20:17:22.397229  761216 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:22.397821  761216 main.go:141] libmachine: Using API Version  1
	I0729 20:17:22.397851  761216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:22.398206  761216 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:22.398409  761216 main.go:141] libmachine: (ha-344518-m02) Calling .GetState
	I0729 20:17:22.400102  761216 status.go:330] ha-344518-m02 host status = "Stopped" (err=<nil>)
	I0729 20:17:22.400121  761216 status.go:343] host is not running, skipping remaining checks
	I0729 20:17:22.400145  761216 status.go:257] ha-344518-m02 status: &{Name:ha-344518-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:17:22.400165  761216 status.go:255] checking status of ha-344518-m03 ...
	I0729 20:17:22.400461  761216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:22.400497  761216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:22.415590  761216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35005
	I0729 20:17:22.416078  761216 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:22.416511  761216 main.go:141] libmachine: Using API Version  1
	I0729 20:17:22.416536  761216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:22.416863  761216 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:22.417037  761216 main.go:141] libmachine: (ha-344518-m03) Calling .GetState
	I0729 20:17:22.418666  761216 status.go:330] ha-344518-m03 host status = "Running" (err=<nil>)
	I0729 20:17:22.418683  761216 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:17:22.418968  761216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:22.419000  761216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:22.438253  761216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0729 20:17:22.438848  761216 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:22.439477  761216 main.go:141] libmachine: Using API Version  1
	I0729 20:17:22.439501  761216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:22.439813  761216 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:22.440063  761216 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:17:22.443181  761216 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:22.443636  761216 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:17:22.443667  761216 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:22.443790  761216 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:17:22.444218  761216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:22.444265  761216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:22.459965  761216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39371
	I0729 20:17:22.460570  761216 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:22.461177  761216 main.go:141] libmachine: Using API Version  1
	I0729 20:17:22.461206  761216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:22.461580  761216 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:22.461786  761216 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:17:22.462014  761216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:17:22.462042  761216 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:17:22.465508  761216 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:22.466018  761216 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:17:22.466047  761216 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:22.466311  761216 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:17:22.466513  761216 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:17:22.466696  761216 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:17:22.466837  761216 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:17:22.551447  761216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:17:22.565986  761216 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:17:22.566017  761216 api_server.go:166] Checking apiserver status ...
	I0729 20:17:22.566065  761216 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:17:22.579534  761216 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup
	W0729 20:17:22.588356  761216 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:17:22.588409  761216 ssh_runner.go:195] Run: ls
	I0729 20:17:22.592392  761216 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:17:22.596855  761216 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:17:22.596879  761216 status.go:422] ha-344518-m03 apiserver status = Running (err=<nil>)
	I0729 20:17:22.596887  761216 status.go:257] ha-344518-m03 status: &{Name:ha-344518-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:17:22.596903  761216 status.go:255] checking status of ha-344518-m04 ...
	I0729 20:17:22.597284  761216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:22.597341  761216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:22.612621  761216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38705
	I0729 20:17:22.613195  761216 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:22.613708  761216 main.go:141] libmachine: Using API Version  1
	I0729 20:17:22.613737  761216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:22.614060  761216 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:22.614251  761216 main.go:141] libmachine: (ha-344518-m04) Calling .GetState
	I0729 20:17:22.615807  761216 status.go:330] ha-344518-m04 host status = "Running" (err=<nil>)
	I0729 20:17:22.615824  761216 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:17:22.616126  761216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:22.616180  761216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:22.631368  761216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34815
	I0729 20:17:22.631805  761216 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:22.632340  761216 main.go:141] libmachine: Using API Version  1
	I0729 20:17:22.632361  761216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:22.632712  761216 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:22.632916  761216 main.go:141] libmachine: (ha-344518-m04) Calling .GetIP
	I0729 20:17:22.636042  761216 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:22.636562  761216 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:17:22.636604  761216 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:22.636772  761216 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:17:22.637091  761216 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:22.637129  761216 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:22.652254  761216 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40221
	I0729 20:17:22.652756  761216 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:22.653274  761216 main.go:141] libmachine: Using API Version  1
	I0729 20:17:22.653307  761216 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:22.653680  761216 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:22.653895  761216 main.go:141] libmachine: (ha-344518-m04) Calling .DriverName
	I0729 20:17:22.654110  761216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:17:22.654130  761216 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHHostname
	I0729 20:17:22.656890  761216 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:22.657375  761216 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:17:22.657398  761216 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:22.657565  761216 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHPort
	I0729 20:17:22.657738  761216 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHKeyPath
	I0729 20:17:22.657873  761216 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHUsername
	I0729 20:17:22.658013  761216 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m04/id_rsa Username:docker}
	I0729 20:17:22.738873  761216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:17:22.753718  761216 status.go:257] ha-344518-m04 status: &{Name:ha-344518-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
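Note on the apiserver probe sequence visible in the stderr above: status first locates the kube-apiserver process with pgrep, then tries to read its freezer cgroup (the egrep exits with status 1, which produces the warning), and finally falls back to polling https://192.168.39.254:8443/healthz, which returns 200. The standalone Go sketch below reproduces only that last healthz step; it is illustrative rather than minikube's actual code, and the apiserverHealthy helper and hard-coded endpoint are assumptions made for the example.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy mirrors the fallback seen in the log: GET <endpoint>/healthz
	// and treat an HTTP 200 response with body "ok" as a running apiserver.
	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Certificate verification is skipped purely for illustration; the
			// test cluster serves /healthz with a self-signed CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}

	func main() {
		ok, err := apiserverHealthy("https://192.168.39.254:8443")
		fmt.Println("apiserver healthy:", ok, "err:", err)
	}
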
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr: exit status 7 (616.383507ms)

                                                
                                                
-- stdout --
	ha-344518
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-344518-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:17:34.302848  761321 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:17:34.302981  761321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:17:34.302993  761321 out.go:304] Setting ErrFile to fd 2...
	I0729 20:17:34.302998  761321 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:17:34.303224  761321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:17:34.303400  761321 out.go:298] Setting JSON to false
	I0729 20:17:34.303434  761321 mustload.go:65] Loading cluster: ha-344518
	I0729 20:17:34.303492  761321 notify.go:220] Checking for updates...
	I0729 20:17:34.303825  761321 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:17:34.303843  761321 status.go:255] checking status of ha-344518 ...
	I0729 20:17:34.304328  761321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:34.304388  761321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:34.319699  761321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41473
	I0729 20:17:34.320205  761321 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:34.320819  761321 main.go:141] libmachine: Using API Version  1
	I0729 20:17:34.320839  761321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:34.321237  761321 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:34.321474  761321 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:17:34.323174  761321 status.go:330] ha-344518 host status = "Running" (err=<nil>)
	I0729 20:17:34.323189  761321 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:17:34.323495  761321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:34.323527  761321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:34.338833  761321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38995
	I0729 20:17:34.339307  761321 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:34.339786  761321 main.go:141] libmachine: Using API Version  1
	I0729 20:17:34.339808  761321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:34.340113  761321 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:34.340294  761321 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:17:34.343220  761321 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:34.343800  761321 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:17:34.343831  761321 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:34.343953  761321 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:17:34.344369  761321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:34.344430  761321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:34.359253  761321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46249
	I0729 20:17:34.359651  761321 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:34.360106  761321 main.go:141] libmachine: Using API Version  1
	I0729 20:17:34.360129  761321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:34.360414  761321 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:34.360560  761321 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:17:34.360708  761321 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:17:34.360738  761321 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:17:34.363554  761321 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:34.363965  761321 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:17:34.363998  761321 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:17:34.364126  761321 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:17:34.364314  761321 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:17:34.364477  761321 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:17:34.364653  761321 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:17:34.448702  761321 ssh_runner.go:195] Run: systemctl --version
	I0729 20:17:34.454445  761321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:17:34.468739  761321 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:17:34.468773  761321 api_server.go:166] Checking apiserver status ...
	I0729 20:17:34.468806  761321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:17:34.483499  761321 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup
	W0729 20:17:34.493790  761321 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1179/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:17:34.493851  761321 ssh_runner.go:195] Run: ls
	I0729 20:17:34.498243  761321 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:17:34.502692  761321 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:17:34.502713  761321 status.go:422] ha-344518 apiserver status = Running (err=<nil>)
	I0729 20:17:34.502724  761321 status.go:257] ha-344518 status: &{Name:ha-344518 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:17:34.502739  761321 status.go:255] checking status of ha-344518-m02 ...
	I0729 20:17:34.503029  761321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:34.503063  761321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:34.520014  761321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44435
	I0729 20:17:34.520480  761321 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:34.520988  761321 main.go:141] libmachine: Using API Version  1
	I0729 20:17:34.521009  761321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:34.521352  761321 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:34.521574  761321 main.go:141] libmachine: (ha-344518-m02) Calling .GetState
	I0729 20:17:34.523095  761321 status.go:330] ha-344518-m02 host status = "Stopped" (err=<nil>)
	I0729 20:17:34.523110  761321 status.go:343] host is not running, skipping remaining checks
	I0729 20:17:34.523118  761321 status.go:257] ha-344518-m02 status: &{Name:ha-344518-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:17:34.523148  761321 status.go:255] checking status of ha-344518-m03 ...
	I0729 20:17:34.523542  761321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:34.523589  761321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:34.539435  761321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39349
	I0729 20:17:34.539919  761321 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:34.540424  761321 main.go:141] libmachine: Using API Version  1
	I0729 20:17:34.540440  761321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:34.540740  761321 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:34.540909  761321 main.go:141] libmachine: (ha-344518-m03) Calling .GetState
	I0729 20:17:34.542356  761321 status.go:330] ha-344518-m03 host status = "Running" (err=<nil>)
	I0729 20:17:34.542372  761321 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:17:34.542667  761321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:34.542711  761321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:34.558779  761321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45305
	I0729 20:17:34.559208  761321 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:34.559709  761321 main.go:141] libmachine: Using API Version  1
	I0729 20:17:34.559730  761321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:34.560092  761321 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:34.560328  761321 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:17:34.563542  761321 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:34.563994  761321 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:17:34.564022  761321 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:34.564274  761321 host.go:66] Checking if "ha-344518-m03" exists ...
	I0729 20:17:34.564744  761321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:34.564796  761321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:34.582370  761321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0729 20:17:34.582833  761321 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:34.583372  761321 main.go:141] libmachine: Using API Version  1
	I0729 20:17:34.583400  761321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:34.583750  761321 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:34.583947  761321 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:17:34.584163  761321 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:17:34.584190  761321 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:17:34.586856  761321 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:34.587243  761321 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:17:34.587292  761321 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:34.587417  761321 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:17:34.587595  761321 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:17:34.587760  761321 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:17:34.587935  761321 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:17:34.671896  761321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:17:34.687704  761321 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:17:34.687745  761321 api_server.go:166] Checking apiserver status ...
	I0729 20:17:34.687796  761321 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:17:34.701799  761321 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup
	W0729 20:17:34.710717  761321 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1609/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:17:34.710778  761321 ssh_runner.go:195] Run: ls
	I0729 20:17:34.715124  761321 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:17:34.720793  761321 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:17:34.720816  761321 status.go:422] ha-344518-m03 apiserver status = Running (err=<nil>)
	I0729 20:17:34.720824  761321 status.go:257] ha-344518-m03 status: &{Name:ha-344518-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:17:34.720840  761321 status.go:255] checking status of ha-344518-m04 ...
	I0729 20:17:34.721125  761321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:34.721158  761321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:34.736439  761321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38493
	I0729 20:17:34.736867  761321 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:34.737323  761321 main.go:141] libmachine: Using API Version  1
	I0729 20:17:34.737345  761321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:34.737675  761321 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:34.737846  761321 main.go:141] libmachine: (ha-344518-m04) Calling .GetState
	I0729 20:17:34.739459  761321 status.go:330] ha-344518-m04 host status = "Running" (err=<nil>)
	I0729 20:17:34.739479  761321 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:17:34.739841  761321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:34.739886  761321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:34.754647  761321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34561
	I0729 20:17:34.755022  761321 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:34.755537  761321 main.go:141] libmachine: Using API Version  1
	I0729 20:17:34.755562  761321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:34.755945  761321 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:34.756166  761321 main.go:141] libmachine: (ha-344518-m04) Calling .GetIP
	I0729 20:17:34.758923  761321 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:34.759353  761321 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:17:34.759381  761321 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:34.759533  761321 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:17:34.759827  761321 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:34.759861  761321 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:34.774415  761321 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46663
	I0729 20:17:34.774850  761321 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:34.775310  761321 main.go:141] libmachine: Using API Version  1
	I0729 20:17:34.775339  761321 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:34.775659  761321 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:34.775870  761321 main.go:141] libmachine: (ha-344518-m04) Calling .DriverName
	I0729 20:17:34.776014  761321 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:17:34.776047  761321 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHHostname
	I0729 20:17:34.778634  761321 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:34.779091  761321 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:17:34.779128  761321 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:34.779287  761321 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHPort
	I0729 20:17:34.779457  761321 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHKeyPath
	I0729 20:17:34.779616  761321 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHUsername
	I0729 20:17:34.779772  761321 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m04/id_rsa Username:docker}
	I0729 20:17:34.858930  761321 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:17:34.873598  761321 status.go:257] ha-344518-m04 status: &{Name:ha-344518-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr" : exit status 7
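The assertion fails because ha-344518-m02 is reported Stopped (host, kubelet and apiserver all Stopped in the stdout above), so the status command returns exit status 7 rather than 0. A simplified, self-contained sketch of what the check at ha_test.go:432 effectively verifies is shown below; it is not the actual test code, just an illustration that reuses the same binary path and profile name.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same command the test harness invokes and surface its exit code.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-344518", "status", "-v=7", "--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// In this run the exit code was 7 because ha-344518-m02 was stopped;
			// the test requires 0 (all nodes Running) and therefore fails.
			fmt.Println("status exit code:", exitErr.ExitCode())
		}
	}
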
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-344518 -n ha-344518
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-344518 logs -n 25: (1.349543551s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m03:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518:/home/docker/cp-test_ha-344518-m03_ha-344518.txt                       |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518 sudo cat                                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m03_ha-344518.txt                                 |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m03:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m02:/home/docker/cp-test_ha-344518-m03_ha-344518-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m02 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m03_ha-344518-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m03:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04:/home/docker/cp-test_ha-344518-m03_ha-344518-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m04 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m03_ha-344518-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-344518 cp testdata/cp-test.txt                                                | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1656315222/001/cp-test_ha-344518-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518:/home/docker/cp-test_ha-344518-m04_ha-344518.txt                       |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518 sudo cat                                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m04_ha-344518.txt                                 |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m02:/home/docker/cp-test_ha-344518-m04_ha-344518-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m02 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m04_ha-344518-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03:/home/docker/cp-test_ha-344518-m04_ha-344518-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m03 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m04_ha-344518-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-344518 node stop m02 -v=7                                                     | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-344518 node start m02 -v=7                                                    | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 20:09:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 20:09:06.231628  755599 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:09:06.231745  755599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:09:06.231753  755599 out.go:304] Setting ErrFile to fd 2...
	I0729 20:09:06.231757  755599 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:09:06.231921  755599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:09:06.232515  755599 out.go:298] Setting JSON to false
	I0729 20:09:06.233440  755599 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":13893,"bootTime":1722269853,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 20:09:06.233498  755599 start.go:139] virtualization: kvm guest
	I0729 20:09:06.235386  755599 out.go:177] * [ha-344518] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 20:09:06.236562  755599 notify.go:220] Checking for updates...
	I0729 20:09:06.236588  755599 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 20:09:06.238002  755599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 20:09:06.239211  755599 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:09:06.240449  755599 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:09:06.241551  755599 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 20:09:06.242850  755599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 20:09:06.244188  755599 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 20:09:06.278842  755599 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 20:09:06.280106  755599 start.go:297] selected driver: kvm2
	I0729 20:09:06.280121  755599 start.go:901] validating driver "kvm2" against <nil>
	I0729 20:09:06.280147  755599 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 20:09:06.280916  755599 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:09:06.280994  755599 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 20:09:06.296612  755599 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 20:09:06.296658  755599 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 20:09:06.296868  755599 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 20:09:06.296926  755599 cni.go:84] Creating CNI manager for ""
	I0729 20:09:06.296937  755599 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0729 20:09:06.296945  755599 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 20:09:06.296993  755599 start.go:340] cluster config:
	{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:09:06.297084  755599 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:09:06.298814  755599 out.go:177] * Starting "ha-344518" primary control-plane node in "ha-344518" cluster
	I0729 20:09:06.299933  755599 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:09:06.299968  755599 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 20:09:06.299979  755599 cache.go:56] Caching tarball of preloaded images
	I0729 20:09:06.300071  755599 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 20:09:06.300082  755599 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 20:09:06.300394  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:09:06.300421  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json: {Name:mk224013752309fc375b2d4f8dabe788d7615796 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:06.300553  755599 start.go:360] acquireMachinesLock for ha-344518: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:09:06.300579  755599 start.go:364] duration metric: took 14.513µs to acquireMachinesLock for "ha-344518"
	I0729 20:09:06.300594  755599 start.go:93] Provisioning new machine with config: &{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:09:06.300656  755599 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 20:09:06.302205  755599 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 20:09:06.302327  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:09:06.302360  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:09:06.316692  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36309
	I0729 20:09:06.317211  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:09:06.317813  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:09:06.317837  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:09:06.318209  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:09:06.318430  755599 main.go:141] libmachine: (ha-344518) Calling .GetMachineName
	I0729 20:09:06.318601  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:06.318777  755599 start.go:159] libmachine.API.Create for "ha-344518" (driver="kvm2")
	I0729 20:09:06.318804  755599 client.go:168] LocalClient.Create starting
	I0729 20:09:06.318838  755599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem
	I0729 20:09:06.318870  755599 main.go:141] libmachine: Decoding PEM data...
	I0729 20:09:06.318887  755599 main.go:141] libmachine: Parsing certificate...
	I0729 20:09:06.318949  755599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem
	I0729 20:09:06.318966  755599 main.go:141] libmachine: Decoding PEM data...
	I0729 20:09:06.318979  755599 main.go:141] libmachine: Parsing certificate...
	I0729 20:09:06.318994  755599 main.go:141] libmachine: Running pre-create checks...
	I0729 20:09:06.319006  755599 main.go:141] libmachine: (ha-344518) Calling .PreCreateCheck
	I0729 20:09:06.319328  755599 main.go:141] libmachine: (ha-344518) Calling .GetConfigRaw
	I0729 20:09:06.319715  755599 main.go:141] libmachine: Creating machine...
	I0729 20:09:06.319729  755599 main.go:141] libmachine: (ha-344518) Calling .Create
	I0729 20:09:06.319853  755599 main.go:141] libmachine: (ha-344518) Creating KVM machine...
	I0729 20:09:06.320964  755599 main.go:141] libmachine: (ha-344518) DBG | found existing default KVM network
	I0729 20:09:06.321728  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:06.321596  755622 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f350}
	I0729 20:09:06.321744  755599 main.go:141] libmachine: (ha-344518) DBG | created network xml: 
	I0729 20:09:06.321754  755599 main.go:141] libmachine: (ha-344518) DBG | <network>
	I0729 20:09:06.321761  755599 main.go:141] libmachine: (ha-344518) DBG |   <name>mk-ha-344518</name>
	I0729 20:09:06.321770  755599 main.go:141] libmachine: (ha-344518) DBG |   <dns enable='no'/>
	I0729 20:09:06.321776  755599 main.go:141] libmachine: (ha-344518) DBG |   
	I0729 20:09:06.321785  755599 main.go:141] libmachine: (ha-344518) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 20:09:06.321793  755599 main.go:141] libmachine: (ha-344518) DBG |     <dhcp>
	I0729 20:09:06.321803  755599 main.go:141] libmachine: (ha-344518) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 20:09:06.321818  755599 main.go:141] libmachine: (ha-344518) DBG |     </dhcp>
	I0729 20:09:06.321856  755599 main.go:141] libmachine: (ha-344518) DBG |   </ip>
	I0729 20:09:06.321889  755599 main.go:141] libmachine: (ha-344518) DBG |   
	I0729 20:09:06.321972  755599 main.go:141] libmachine: (ha-344518) DBG | </network>
	I0729 20:09:06.321990  755599 main.go:141] libmachine: (ha-344518) DBG | 
	I0729 20:09:06.326724  755599 main.go:141] libmachine: (ha-344518) DBG | trying to create private KVM network mk-ha-344518 192.168.39.0/24...
	I0729 20:09:06.392240  755599 main.go:141] libmachine: (ha-344518) Setting up store path in /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518 ...
	I0729 20:09:06.392278  755599 main.go:141] libmachine: (ha-344518) DBG | private KVM network mk-ha-344518 192.168.39.0/24 created
	I0729 20:09:06.392303  755599 main.go:141] libmachine: (ha-344518) Building disk image from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 20:09:06.392343  755599 main.go:141] libmachine: (ha-344518) Downloading /home/jenkins/minikube-integration/19344-733808/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 20:09:06.392361  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:06.392092  755622 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:09:06.662139  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:06.662000  755622 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa...
	I0729 20:09:07.120112  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:07.119894  755622 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/ha-344518.rawdisk...
	I0729 20:09:07.120156  755599 main.go:141] libmachine: (ha-344518) DBG | Writing magic tar header
	I0729 20:09:07.120174  755599 main.go:141] libmachine: (ha-344518) DBG | Writing SSH key tar header
	I0729 20:09:07.120201  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:07.120077  755622 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518 ...
	I0729 20:09:07.120218  755599 main.go:141] libmachine: (ha-344518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518
	I0729 20:09:07.120249  755599 main.go:141] libmachine: (ha-344518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines
	I0729 20:09:07.120266  755599 main.go:141] libmachine: (ha-344518) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518 (perms=drwx------)
	I0729 20:09:07.120285  755599 main.go:141] libmachine: (ha-344518) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines (perms=drwxr-xr-x)
	I0729 20:09:07.120299  755599 main.go:141] libmachine: (ha-344518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:09:07.120317  755599 main.go:141] libmachine: (ha-344518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808
	I0729 20:09:07.120328  755599 main.go:141] libmachine: (ha-344518) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 20:09:07.120337  755599 main.go:141] libmachine: (ha-344518) DBG | Checking permissions on dir: /home/jenkins
	I0729 20:09:07.120346  755599 main.go:141] libmachine: (ha-344518) DBG | Checking permissions on dir: /home
	I0729 20:09:07.120362  755599 main.go:141] libmachine: (ha-344518) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube (perms=drwxr-xr-x)
	I0729 20:09:07.120373  755599 main.go:141] libmachine: (ha-344518) DBG | Skipping /home - not owner
	I0729 20:09:07.120386  755599 main.go:141] libmachine: (ha-344518) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808 (perms=drwxrwxr-x)
	I0729 20:09:07.120400  755599 main.go:141] libmachine: (ha-344518) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 20:09:07.120409  755599 main.go:141] libmachine: (ha-344518) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 20:09:07.120418  755599 main.go:141] libmachine: (ha-344518) Creating domain...
	I0729 20:09:07.121549  755599 main.go:141] libmachine: (ha-344518) define libvirt domain using xml: 
	I0729 20:09:07.121577  755599 main.go:141] libmachine: (ha-344518) <domain type='kvm'>
	I0729 20:09:07.121608  755599 main.go:141] libmachine: (ha-344518)   <name>ha-344518</name>
	I0729 20:09:07.121628  755599 main.go:141] libmachine: (ha-344518)   <memory unit='MiB'>2200</memory>
	I0729 20:09:07.121637  755599 main.go:141] libmachine: (ha-344518)   <vcpu>2</vcpu>
	I0729 20:09:07.121645  755599 main.go:141] libmachine: (ha-344518)   <features>
	I0729 20:09:07.121650  755599 main.go:141] libmachine: (ha-344518)     <acpi/>
	I0729 20:09:07.121658  755599 main.go:141] libmachine: (ha-344518)     <apic/>
	I0729 20:09:07.121663  755599 main.go:141] libmachine: (ha-344518)     <pae/>
	I0729 20:09:07.121672  755599 main.go:141] libmachine: (ha-344518)     
	I0729 20:09:07.121679  755599 main.go:141] libmachine: (ha-344518)   </features>
	I0729 20:09:07.121687  755599 main.go:141] libmachine: (ha-344518)   <cpu mode='host-passthrough'>
	I0729 20:09:07.121704  755599 main.go:141] libmachine: (ha-344518)   
	I0729 20:09:07.121712  755599 main.go:141] libmachine: (ha-344518)   </cpu>
	I0729 20:09:07.121716  755599 main.go:141] libmachine: (ha-344518)   <os>
	I0729 20:09:07.121720  755599 main.go:141] libmachine: (ha-344518)     <type>hvm</type>
	I0729 20:09:07.121725  755599 main.go:141] libmachine: (ha-344518)     <boot dev='cdrom'/>
	I0729 20:09:07.121732  755599 main.go:141] libmachine: (ha-344518)     <boot dev='hd'/>
	I0729 20:09:07.121737  755599 main.go:141] libmachine: (ha-344518)     <bootmenu enable='no'/>
	I0729 20:09:07.121743  755599 main.go:141] libmachine: (ha-344518)   </os>
	I0729 20:09:07.121748  755599 main.go:141] libmachine: (ha-344518)   <devices>
	I0729 20:09:07.121759  755599 main.go:141] libmachine: (ha-344518)     <disk type='file' device='cdrom'>
	I0729 20:09:07.121798  755599 main.go:141] libmachine: (ha-344518)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/boot2docker.iso'/>
	I0729 20:09:07.121834  755599 main.go:141] libmachine: (ha-344518)       <target dev='hdc' bus='scsi'/>
	I0729 20:09:07.121858  755599 main.go:141] libmachine: (ha-344518)       <readonly/>
	I0729 20:09:07.121871  755599 main.go:141] libmachine: (ha-344518)     </disk>
	I0729 20:09:07.121883  755599 main.go:141] libmachine: (ha-344518)     <disk type='file' device='disk'>
	I0729 20:09:07.121897  755599 main.go:141] libmachine: (ha-344518)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 20:09:07.121917  755599 main.go:141] libmachine: (ha-344518)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/ha-344518.rawdisk'/>
	I0729 20:09:07.121934  755599 main.go:141] libmachine: (ha-344518)       <target dev='hda' bus='virtio'/>
	I0729 20:09:07.121946  755599 main.go:141] libmachine: (ha-344518)     </disk>
	I0729 20:09:07.121957  755599 main.go:141] libmachine: (ha-344518)     <interface type='network'>
	I0729 20:09:07.121970  755599 main.go:141] libmachine: (ha-344518)       <source network='mk-ha-344518'/>
	I0729 20:09:07.121979  755599 main.go:141] libmachine: (ha-344518)       <model type='virtio'/>
	I0729 20:09:07.122010  755599 main.go:141] libmachine: (ha-344518)     </interface>
	I0729 20:09:07.122026  755599 main.go:141] libmachine: (ha-344518)     <interface type='network'>
	I0729 20:09:07.122038  755599 main.go:141] libmachine: (ha-344518)       <source network='default'/>
	I0729 20:09:07.122045  755599 main.go:141] libmachine: (ha-344518)       <model type='virtio'/>
	I0729 20:09:07.122055  755599 main.go:141] libmachine: (ha-344518)     </interface>
	I0729 20:09:07.122064  755599 main.go:141] libmachine: (ha-344518)     <serial type='pty'>
	I0729 20:09:07.122074  755599 main.go:141] libmachine: (ha-344518)       <target port='0'/>
	I0729 20:09:07.122092  755599 main.go:141] libmachine: (ha-344518)     </serial>
	I0729 20:09:07.122103  755599 main.go:141] libmachine: (ha-344518)     <console type='pty'>
	I0729 20:09:07.122114  755599 main.go:141] libmachine: (ha-344518)       <target type='serial' port='0'/>
	I0729 20:09:07.122134  755599 main.go:141] libmachine: (ha-344518)     </console>
	I0729 20:09:07.122146  755599 main.go:141] libmachine: (ha-344518)     <rng model='virtio'>
	I0729 20:09:07.122163  755599 main.go:141] libmachine: (ha-344518)       <backend model='random'>/dev/random</backend>
	I0729 20:09:07.122174  755599 main.go:141] libmachine: (ha-344518)     </rng>
	I0729 20:09:07.122184  755599 main.go:141] libmachine: (ha-344518)     
	I0729 20:09:07.122195  755599 main.go:141] libmachine: (ha-344518)     
	I0729 20:09:07.122205  755599 main.go:141] libmachine: (ha-344518)   </devices>
	I0729 20:09:07.122214  755599 main.go:141] libmachine: (ha-344518) </domain>
	I0729 20:09:07.122223  755599 main.go:141] libmachine: (ha-344518) 
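
For readers not familiar with the libvirt objects behind the DBG dumps above: the network and domain XML printed by the driver could equally be loaded by hand. The sketch below is a minimal, hypothetical Go equivalent that shells out to virsh; the file names (mk-example.xml, example-domain.xml) are placeholders and not artifacts of this run.

// Hypothetical sketch: define and start a libvirt network, then define a
// domain, from XML files like the ones dumped above, by shelling out to virsh.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) error {
	out, err := exec.Command("virsh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("virsh %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	// Persistently define the private network, then start it.
	if err := run("net-define", "mk-example.xml"); err != nil {
		panic(err)
	}
	if err := run("net-start", "mk-example"); err != nil {
		panic(err)
	}
	// Define the guest from its domain XML; a separate "virsh start" boots it.
	if err := run("define", "example-domain.xml"); err != nil {
		panic(err)
	}
}
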
	I0729 20:09:07.126629  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:b8:f5:36 in network default
	I0729 20:09:07.127217  755599 main.go:141] libmachine: (ha-344518) Ensuring networks are active...
	I0729 20:09:07.127238  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:07.127866  755599 main.go:141] libmachine: (ha-344518) Ensuring network default is active
	I0729 20:09:07.128161  755599 main.go:141] libmachine: (ha-344518) Ensuring network mk-ha-344518 is active
	I0729 20:09:07.128730  755599 main.go:141] libmachine: (ha-344518) Getting domain xml...
	I0729 20:09:07.129444  755599 main.go:141] libmachine: (ha-344518) Creating domain...
	I0729 20:09:08.325465  755599 main.go:141] libmachine: (ha-344518) Waiting to get IP...
	I0729 20:09:08.326138  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:08.326574  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:08.326614  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:08.326561  755622 retry.go:31] will retry after 224.638769ms: waiting for machine to come up
	I0729 20:09:08.553151  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:08.553679  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:08.553709  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:08.553642  755622 retry.go:31] will retry after 360.458872ms: waiting for machine to come up
	I0729 20:09:08.915165  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:08.915618  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:08.915650  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:08.915542  755622 retry.go:31] will retry after 382.171333ms: waiting for machine to come up
	I0729 20:09:09.299192  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:09.299704  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:09.299726  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:09.299643  755622 retry.go:31] will retry after 574.829345ms: waiting for machine to come up
	I0729 20:09:09.876480  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:09.876900  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:09.876929  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:09.876838  755622 retry.go:31] will retry after 617.694165ms: waiting for machine to come up
	I0729 20:09:10.495627  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:10.496026  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:10.496077  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:10.495986  755622 retry.go:31] will retry after 847.62874ms: waiting for machine to come up
	I0729 20:09:11.345637  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:11.346047  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:11.346086  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:11.345988  755622 retry.go:31] will retry after 1.112051252s: waiting for machine to come up
	I0729 20:09:12.460263  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:12.460801  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:12.460828  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:12.460733  755622 retry.go:31] will retry after 1.450822293s: waiting for machine to come up
	I0729 20:09:13.913413  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:13.913807  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:13.913837  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:13.913758  755622 retry.go:31] will retry after 1.204942537s: waiting for machine to come up
	I0729 20:09:15.120158  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:15.120563  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:15.120597  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:15.120536  755622 retry.go:31] will retry after 1.553270386s: waiting for machine to come up
	I0729 20:09:16.675191  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:16.675649  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:16.675680  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:16.675591  755622 retry.go:31] will retry after 2.793041861s: waiting for machine to come up
	I0729 20:09:19.472545  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:19.472921  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:19.472942  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:19.472867  755622 retry.go:31] will retry after 2.196371552s: waiting for machine to come up
	I0729 20:09:21.670777  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:21.671128  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find current IP address of domain ha-344518 in network mk-ha-344518
	I0729 20:09:21.671160  755599 main.go:141] libmachine: (ha-344518) DBG | I0729 20:09:21.671075  755622 retry.go:31] will retry after 4.263171271s: waiting for machine to come up
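
The "will retry after …: waiting for machine to come up" lines above reflect a poll loop that keeps asking libvirt for the guest's DHCP lease, sleeping a little longer between attempts. The following is a minimal sketch of that pattern with a growing, jittered backoff; it is not minikube's retry.go, just the same idea.

// Minimal sketch of a poll-with-growing-backoff loop, in the spirit of the
// "will retry after ..." lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor keeps calling check until it succeeds or the deadline passes,
// sleeping a little longer (with jitter) between attempts.
func waitFor(check func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := check(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		} else {
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			backoff = backoff * 3 / 2 // grow gradually, as the intervals above do
		}
	}
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done:", err)
}
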
	I0729 20:09:25.939488  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:25.940003  755599 main.go:141] libmachine: (ha-344518) Found IP for machine: 192.168.39.238
	I0729 20:09:25.940019  755599 main.go:141] libmachine: (ha-344518) Reserving static IP address...
	I0729 20:09:25.940057  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has current primary IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:25.940425  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find host DHCP lease matching {name: "ha-344518", mac: "52:54:00:e2:94:80", ip: "192.168.39.238"} in network mk-ha-344518
	I0729 20:09:26.013283  755599 main.go:141] libmachine: (ha-344518) DBG | Getting to WaitForSSH function...
	I0729 20:09:26.013316  755599 main.go:141] libmachine: (ha-344518) Reserved static IP address: 192.168.39.238
	I0729 20:09:26.013329  755599 main.go:141] libmachine: (ha-344518) Waiting for SSH to be available...
	I0729 20:09:26.016100  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:26.016491  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518
	I0729 20:09:26.016526  755599 main.go:141] libmachine: (ha-344518) DBG | unable to find defined IP address of network mk-ha-344518 interface with MAC address 52:54:00:e2:94:80
	I0729 20:09:26.016697  755599 main.go:141] libmachine: (ha-344518) DBG | Using SSH client type: external
	I0729 20:09:26.016723  755599 main.go:141] libmachine: (ha-344518) DBG | Using SSH private key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa (-rw-------)
	I0729 20:09:26.016752  755599 main.go:141] libmachine: (ha-344518) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 20:09:26.016767  755599 main.go:141] libmachine: (ha-344518) DBG | About to run SSH command:
	I0729 20:09:26.016778  755599 main.go:141] libmachine: (ha-344518) DBG | exit 0
	I0729 20:09:26.020608  755599 main.go:141] libmachine: (ha-344518) DBG | SSH cmd err, output: exit status 255: 
	I0729 20:09:26.020626  755599 main.go:141] libmachine: (ha-344518) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0729 20:09:26.020633  755599 main.go:141] libmachine: (ha-344518) DBG | command : exit 0
	I0729 20:09:26.020641  755599 main.go:141] libmachine: (ha-344518) DBG | err     : exit status 255
	I0729 20:09:26.020651  755599 main.go:141] libmachine: (ha-344518) DBG | output  : 
	I0729 20:09:29.021997  755599 main.go:141] libmachine: (ha-344518) DBG | Getting to WaitForSSH function...
	I0729 20:09:29.024803  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.025367  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.025408  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.025586  755599 main.go:141] libmachine: (ha-344518) DBG | Using SSH client type: external
	I0729 20:09:29.025624  755599 main.go:141] libmachine: (ha-344518) DBG | Using SSH private key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa (-rw-------)
	I0729 20:09:29.025655  755599 main.go:141] libmachine: (ha-344518) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.238 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 20:09:29.025670  755599 main.go:141] libmachine: (ha-344518) DBG | About to run SSH command:
	I0729 20:09:29.025683  755599 main.go:141] libmachine: (ha-344518) DBG | exit 0
	I0729 20:09:29.147987  755599 main.go:141] libmachine: (ha-344518) DBG | SSH cmd err, output: <nil>: 
	I0729 20:09:29.148225  755599 main.go:141] libmachine: (ha-344518) KVM machine creation complete!
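
The WaitForSSH lines above probe the guest by running an external ssh with the command "exit 0" until it returns cleanly; the first attempt fails with status 255 because the address is not reachable yet. A hypothetical stand-alone version of that probe, reusing the same ssh options the log prints (the key path below is a placeholder):

// Hypothetical stand-alone SSH liveness probe, modelled on the external
// "exit 0" check in the log. User, host and key path are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshAlive(user, host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit", "0")
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 20; i++ {
		if sshAlive("docker", "192.168.39.238", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log waits roughly 3s between probes
	}
	fmt.Println("gave up waiting for SSH")
}
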
	I0729 20:09:29.148740  755599 main.go:141] libmachine: (ha-344518) Calling .GetConfigRaw
	I0729 20:09:29.149286  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:29.149482  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:29.149639  755599 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 20:09:29.149657  755599 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:09:29.150765  755599 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 20:09:29.150780  755599 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 20:09:29.150786  755599 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 20:09:29.150792  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:29.153178  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.153584  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.153629  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.153741  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:29.153910  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.154078  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.154233  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:29.154381  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:09:29.154599  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:09:29.154615  755599 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 20:09:29.255168  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:09:29.255191  755599 main.go:141] libmachine: Detecting the provisioner...
	I0729 20:09:29.255198  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:29.258198  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.258528  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.258570  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.258733  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:29.258956  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.259147  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.259303  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:29.259460  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:09:29.259658  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:09:29.259671  755599 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 20:09:29.360632  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 20:09:29.360702  755599 main.go:141] libmachine: found compatible host: buildroot
	I0729 20:09:29.360709  755599 main.go:141] libmachine: Provisioning with buildroot...
	I0729 20:09:29.360717  755599 main.go:141] libmachine: (ha-344518) Calling .GetMachineName
	I0729 20:09:29.360983  755599 buildroot.go:166] provisioning hostname "ha-344518"
	I0729 20:09:29.361016  755599 main.go:141] libmachine: (ha-344518) Calling .GetMachineName
	I0729 20:09:29.361230  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:29.363712  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.364003  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.364024  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.364212  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:29.364387  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.364632  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.364808  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:29.364994  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:09:29.365155  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:09:29.365166  755599 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344518 && echo "ha-344518" | sudo tee /etc/hostname
	I0729 20:09:29.482065  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344518
	
	I0729 20:09:29.482099  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:29.485276  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.485636  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.485664  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.485828  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:29.486070  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.486314  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.486479  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:29.486680  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:09:29.486859  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:09:29.486876  755599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344518/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 20:09:29.596714  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:09:29.596745  755599 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 20:09:29.596764  755599 buildroot.go:174] setting up certificates
	I0729 20:09:29.596775  755599 provision.go:84] configureAuth start
	I0729 20:09:29.596783  755599 main.go:141] libmachine: (ha-344518) Calling .GetMachineName
	I0729 20:09:29.597068  755599 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:09:29.599699  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.600142  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.600171  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.600336  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:29.602797  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.603076  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.603123  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.603328  755599 provision.go:143] copyHostCerts
	I0729 20:09:29.603364  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:09:29.603407  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem, removing ...
	I0729 20:09:29.603420  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:09:29.603500  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 20:09:29.603609  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:09:29.603644  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem, removing ...
	I0729 20:09:29.603655  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:09:29.603697  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 20:09:29.603760  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:09:29.603788  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem, removing ...
	I0729 20:09:29.603798  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:09:29.603831  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 20:09:29.603894  755599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.ha-344518 san=[127.0.0.1 192.168.39.238 ha-344518 localhost minikube]
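
The provision step above issues a server certificate whose SANs cover 127.0.0.1, the guest IP, the hostname, localhost and minikube, signed by the local CA key. The sketch below shows the general technique with the Go standard library only; it is not minikube's provision code, error handling is elided, and the subject names simply mirror the log line for illustration.

// Hypothetical sketch: create a throwaway CA, then sign a server certificate
// carrying the IP and DNS SANs listed in the log line above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA key and self-signed CA certificate.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "example CA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server key and certificate carrying the IP and DNS SANs.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-344518", Organization: []string{"jenkins.ha-344518"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.238")},
		DNSNames:     []string{"ha-344518", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	// Emit the server certificate as PEM (key handling omitted for brevity).
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
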
	I0729 20:09:29.704896  755599 provision.go:177] copyRemoteCerts
	I0729 20:09:29.704996  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 20:09:29.705021  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:29.707815  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.708151  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.708173  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.708381  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:29.708562  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.708701  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:29.708815  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:09:29.789970  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 20:09:29.790054  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 20:09:29.811978  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 20:09:29.812070  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 20:09:29.833425  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 20:09:29.833516  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 20:09:29.855095  755599 provision.go:87] duration metric: took 258.307019ms to configureAuth
	I0729 20:09:29.855125  755599 buildroot.go:189] setting minikube options for container-runtime
	I0729 20:09:29.855328  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:09:29.855418  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:29.858154  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.858489  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:29.858515  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:29.858679  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:29.858885  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.859022  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:29.859206  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:29.859347  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:09:29.859508  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:09:29.859530  755599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 20:09:30.108935  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 20:09:30.108960  755599 main.go:141] libmachine: Checking connection to Docker...
	I0729 20:09:30.108969  755599 main.go:141] libmachine: (ha-344518) Calling .GetURL
	I0729 20:09:30.110328  755599 main.go:141] libmachine: (ha-344518) DBG | Using libvirt version 6000000
	I0729 20:09:30.112412  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.112803  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:30.112831  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.112994  755599 main.go:141] libmachine: Docker is up and running!
	I0729 20:09:30.113013  755599 main.go:141] libmachine: Reticulating splines...
	I0729 20:09:30.113020  755599 client.go:171] duration metric: took 23.794206805s to LocalClient.Create
	I0729 20:09:30.113043  755599 start.go:167] duration metric: took 23.79426731s to libmachine.API.Create "ha-344518"
	I0729 20:09:30.113053  755599 start.go:293] postStartSetup for "ha-344518" (driver="kvm2")
	I0729 20:09:30.113062  755599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 20:09:30.113077  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:30.113372  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 20:09:30.113421  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:30.115495  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.115798  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:30.115825  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.116023  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:30.116223  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:30.116398  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:30.116596  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:09:30.198103  755599 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 20:09:30.202176  755599 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 20:09:30.202216  755599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 20:09:30.202297  755599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 20:09:30.202392  755599 filesync.go:149] local asset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> 7409622.pem in /etc/ssl/certs
	I0729 20:09:30.202404  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /etc/ssl/certs/7409622.pem
	I0729 20:09:30.202493  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 20:09:30.211254  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:09:30.233268  755599 start.go:296] duration metric: took 120.202296ms for postStartSetup
	I0729 20:09:30.233331  755599 main.go:141] libmachine: (ha-344518) Calling .GetConfigRaw
	I0729 20:09:30.234049  755599 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:09:30.236633  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.236926  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:30.236972  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.237181  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:09:30.237355  755599 start.go:128] duration metric: took 23.936687923s to createHost
	I0729 20:09:30.237381  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:30.239552  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.239809  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:30.239842  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.239987  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:30.240179  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:30.240344  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:30.240483  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:30.240647  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:09:30.240821  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:09:30.240831  755599 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 20:09:30.344583  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722283770.321268422
	
	I0729 20:09:30.344619  755599 fix.go:216] guest clock: 1722283770.321268422
	I0729 20:09:30.344627  755599 fix.go:229] Guest: 2024-07-29 20:09:30.321268422 +0000 UTC Remote: 2024-07-29 20:09:30.237366573 +0000 UTC m=+24.042639080 (delta=83.901849ms)
	I0729 20:09:30.344649  755599 fix.go:200] guest clock delta is within tolerance: 83.901849ms
	I0729 20:09:30.344655  755599 start.go:83] releasing machines lock for "ha-344518", held for 24.044068964s
	I0729 20:09:30.344677  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:30.344929  755599 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:09:30.347733  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.348070  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:30.348103  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.348263  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:30.348804  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:30.348977  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:09:30.349086  755599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 20:09:30.349152  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:30.349175  755599 ssh_runner.go:195] Run: cat /version.json
	I0729 20:09:30.349199  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:09:30.352011  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.352060  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.352365  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:30.352393  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.352424  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:30.352444  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:30.352520  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:30.352709  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:09:30.352726  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:30.352868  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:30.352878  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:09:30.353002  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:09:30.353079  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:09:30.353122  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:09:30.457590  755599 ssh_runner.go:195] Run: systemctl --version
	I0729 20:09:30.463336  755599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 20:09:30.618446  755599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 20:09:30.624168  755599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 20:09:30.624257  755599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 20:09:30.639417  755599 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 20:09:30.639452  755599 start.go:495] detecting cgroup driver to use...
	I0729 20:09:30.639529  755599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 20:09:30.656208  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 20:09:30.669079  755599 docker.go:216] disabling cri-docker service (if available) ...
	I0729 20:09:30.669165  755599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 20:09:30.682267  755599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 20:09:30.695146  755599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 20:09:30.801367  755599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 20:09:30.933238  755599 docker.go:232] disabling docker service ...
	I0729 20:09:30.933329  755599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 20:09:30.946563  755599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 20:09:30.958984  755599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 20:09:31.083789  755599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 20:09:31.192813  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 20:09:31.208251  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 20:09:31.226231  755599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 20:09:31.226295  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:09:31.236691  755599 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 20:09:31.236766  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:09:31.246449  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:09:31.256666  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:09:31.266826  755599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 20:09:31.276417  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:09:31.285691  755599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:09:31.300856  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:09:31.310029  755599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 20:09:31.318257  755599 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 20:09:31.318321  755599 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 20:09:31.329044  755599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 20:09:31.337242  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:09:31.439976  755599 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 20:09:31.568009  755599 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 20:09:31.568114  755599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 20:09:31.572733  755599 start.go:563] Will wait 60s for crictl version
	I0729 20:09:31.572795  755599 ssh_runner.go:195] Run: which crictl
	I0729 20:09:31.576009  755599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 20:09:31.612236  755599 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 20:09:31.612336  755599 ssh_runner.go:195] Run: crio --version
	I0729 20:09:31.637427  755599 ssh_runner.go:195] Run: crio --version
	I0729 20:09:31.663928  755599 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 20:09:31.665127  755599 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:09:31.667692  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:31.667981  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:09:31.668000  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:09:31.668234  755599 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 20:09:31.672061  755599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 20:09:31.684203  755599 kubeadm.go:883] updating cluster {Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 20:09:31.684303  755599 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:09:31.684354  755599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:09:31.713791  755599 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.30.3". assuming images are not preloaded.
	I0729 20:09:31.713860  755599 ssh_runner.go:195] Run: which lz4
	I0729 20:09:31.717278  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0729 20:09:31.717389  755599 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 20:09:31.721078  755599 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 20:09:31.721114  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (406200976 bytes)
	I0729 20:09:32.888232  755599 crio.go:462] duration metric: took 1.170872647s to copy over tarball
	I0729 20:09:32.888342  755599 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 20:09:34.911526  755599 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.023148425s)
	I0729 20:09:34.911564  755599 crio.go:469] duration metric: took 2.023293724s to extract the tarball
	I0729 20:09:34.911572  755599 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 20:09:34.949385  755599 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:09:34.996988  755599 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 20:09:34.997024  755599 cache_images.go:84] Images are preloaded, skipping loading
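Once the preload tarball has been extracted into /var, crictl reports the full control-plane image set and the separate image loader is skipped. A quick way to confirm this from inside the node (sketch, using the same crictl the log calls):

	sudo crictl images | grep kube-apiserver
	# expected to list registry.k8s.io/kube-apiserver at v1.30.3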
	I0729 20:09:34.997039  755599 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.30.3 crio true true} ...
	I0729 20:09:34.997188  755599 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
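The [Unit]/[Service]/[Install] fragment above is rendered into a systemd drop-in for the kubelet (the copy to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines further down). To see the effective unit once the drop-in is in place, something like this works inside the VM (sketch):

	systemctl cat kubelet                 # kubelet.service plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart   # the final ExecStart with the flags shown above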
	I0729 20:09:34.997274  755599 ssh_runner.go:195] Run: crio config
	I0729 20:09:35.039660  755599 cni.go:84] Creating CNI manager for ""
	I0729 20:09:35.039682  755599 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 20:09:35.039693  755599 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 20:09:35.039715  755599 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-344518 NodeName:ha-344518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 20:09:35.039844  755599 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-344518"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
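The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are what later gets written to /var/tmp/minikube/kubeadm.yaml and handed to kubeadm init. A hedged way to sanity-check such a config without touching node state, mirroring the invocation used further down (sketch):

	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run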
	I0729 20:09:35.039866  755599 kube-vip.go:115] generating kube-vip config ...
	I0729 20:09:35.039914  755599 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 20:09:35.054787  755599 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 20:09:35.054924  755599 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
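The static pod above runs kube-vip with ARP advertisement of the HA virtual IP 192.168.39.254 on eth0; leader election on the plndr-cp-lock lease decides which control-plane node holds the address, and lb_enable additionally load-balances API-server traffic on port 8443. Once the pod is running, the VIP can be checked from inside the node (sketch):

	ip addr show eth0 | grep 192.168.39.254
	sudo crictl ps --name kube-vip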
	I0729 20:09:35.055003  755599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 20:09:35.064723  755599 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 20:09:35.064797  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 20:09:35.073848  755599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 20:09:35.088657  755599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 20:09:35.103369  755599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 20:09:35.118598  755599 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0729 20:09:35.133021  755599 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 20:09:35.136443  755599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 20:09:35.147245  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:09:35.272541  755599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:09:35.287804  755599 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518 for IP: 192.168.39.238
	I0729 20:09:35.287823  755599 certs.go:194] generating shared ca certs ...
	I0729 20:09:35.287839  755599 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:35.287986  755599 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 20:09:35.288021  755599 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 20:09:35.288049  755599 certs.go:256] generating profile certs ...
	I0729 20:09:35.288127  755599 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key
	I0729 20:09:35.288146  755599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.crt with IP's: []
	I0729 20:09:35.800414  755599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.crt ...
	I0729 20:09:35.800449  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.crt: {Name:mka4861ceb4d2b4f4f8e00578a58573ad449da85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:35.800649  755599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key ...
	I0729 20:09:35.800665  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key: {Name:mkc963128b999a495ef61bfb68512b3764f6d860 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:35.800770  755599 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.3e09c1c5
	I0729 20:09:35.800790  755599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.3e09c1c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.254]
	I0729 20:09:35.908817  755599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.3e09c1c5 ...
	I0729 20:09:35.908862  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.3e09c1c5: {Name:mk1a566c5922b43f8e6d1c091786f27e0530099b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:35.909074  755599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.3e09c1c5 ...
	I0729 20:09:35.909098  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.3e09c1c5: {Name:mk8d83972e312290d7873f49017743d9eba53fc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:35.909210  755599 certs.go:381] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.3e09c1c5 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt
	I0729 20:09:35.909349  755599 certs.go:385] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.3e09c1c5 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key
	I0729 20:09:35.909454  755599 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key
	I0729 20:09:35.909478  755599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt with IP's: []
	I0729 20:09:36.165670  755599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt ...
	I0729 20:09:36.165713  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt: {Name:mk37e45b34dcfba0257c9845376f02e95587a990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:36.165909  755599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key ...
	I0729 20:09:36.165925  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key: {Name:mk99e5bb71e0f27e47589639f230663907745de0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:09:36.166020  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 20:09:36.166044  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 20:09:36.166060  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 20:09:36.166080  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 20:09:36.166096  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 20:09:36.166115  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 20:09:36.166136  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 20:09:36.166156  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 20:09:36.166231  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem (1338 bytes)
	W0729 20:09:36.166282  755599 certs.go:480] ignoring /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962_empty.pem, impossibly tiny 0 bytes
	I0729 20:09:36.166295  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 20:09:36.166333  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 20:09:36.166366  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 20:09:36.166404  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 20:09:36.166461  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:09:36.166500  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:09:36.166520  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem -> /usr/share/ca-certificates/740962.pem
	I0729 20:09:36.166540  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /usr/share/ca-certificates/7409622.pem
	I0729 20:09:36.167816  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 20:09:36.193720  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 20:09:36.214608  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 20:09:36.258795  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 20:09:36.280321  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 20:09:36.301377  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 20:09:36.322311  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 20:09:36.346369  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 20:09:36.370285  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 20:09:36.393965  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem --> /usr/share/ca-certificates/740962.pem (1338 bytes)
	I0729 20:09:36.417695  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /usr/share/ca-certificates/7409622.pem (1708 bytes)
	I0729 20:09:36.441939  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
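The apiserver certificate generated above is signed for the service IPs, localhost, the node IP 192.168.39.238 and the kube-vip VIP 192.168.39.254, so clients can reach the API server through any of them. To inspect the SANs on the copied certificate (sketch, paths as in the log):

	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'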
	I0729 20:09:36.458556  755599 ssh_runner.go:195] Run: openssl version
	I0729 20:09:36.464155  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 20:09:36.473986  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:09:36.478117  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:09:36.478159  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:09:36.483562  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 20:09:36.493208  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/740962.pem && ln -fs /usr/share/ca-certificates/740962.pem /etc/ssl/certs/740962.pem"
	I0729 20:09:36.502511  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/740962.pem
	I0729 20:09:36.506272  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 20:09:36.506324  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/740962.pem
	I0729 20:09:36.511358  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/740962.pem /etc/ssl/certs/51391683.0"
	I0729 20:09:36.520556  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7409622.pem && ln -fs /usr/share/ca-certificates/7409622.pem /etc/ssl/certs/7409622.pem"
	I0729 20:09:36.529563  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7409622.pem
	I0729 20:09:36.533411  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 20:09:36.533459  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7409622.pem
	I0729 20:09:36.538498  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7409622.pem /etc/ssl/certs/3ec20f2e.0"
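The ln -fs calls above implement OpenSSL's hashed-directory lookup: each CA is linked into /etc/ssl/certs under <subject-hash>.0 (b5213941, 51391683, 3ec20f2e here) so TLS clients on the node can find it. The same idiom for a single certificate looks like this (sketch):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"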
	I0729 20:09:36.547613  755599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:09:36.551115  755599 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 20:09:36.551173  755599 kubeadm.go:392] StartCluster: {Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:09:36.551254  755599 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 20:09:36.551306  755599 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 20:09:36.585331  755599 cri.go:89] found id: ""
	I0729 20:09:36.585402  755599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 20:09:36.594338  755599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 20:09:36.602921  755599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 20:09:36.612154  755599 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 20:09:36.612174  755599 kubeadm.go:157] found existing configuration files:
	
	I0729 20:09:36.612213  755599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 20:09:36.621367  755599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 20:09:36.621424  755599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 20:09:36.631445  755599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 20:09:36.641207  755599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 20:09:36.641263  755599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 20:09:36.651213  755599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 20:09:36.660732  755599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 20:09:36.660794  755599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 20:09:36.670772  755599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 20:09:36.680213  755599 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 20:09:36.680261  755599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 20:09:36.690019  755599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 20:09:36.791377  755599 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 20:09:36.791450  755599 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 20:09:36.934210  755599 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 20:09:36.934358  755599 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 20:09:36.934470  755599 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 20:09:37.145429  755599 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 20:09:37.261585  755599 out.go:204]   - Generating certificates and keys ...
	I0729 20:09:37.261702  755599 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 20:09:37.261764  755599 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 20:09:37.369535  755599 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 20:09:37.493916  755599 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 20:09:37.819344  755599 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 20:09:38.049749  755599 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 20:09:38.109721  755599 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 20:09:38.109958  755599 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-344518 localhost] and IPs [192.168.39.238 127.0.0.1 ::1]
	I0729 20:09:38.237477  755599 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 20:09:38.237784  755599 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-344518 localhost] and IPs [192.168.39.238 127.0.0.1 ::1]
	I0729 20:09:38.391581  755599 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 20:09:38.620918  755599 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 20:09:38.819819  755599 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 20:09:38.820100  755599 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 20:09:39.226621  755599 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 20:09:39.506614  755599 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 20:09:39.675030  755599 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 20:09:39.813232  755599 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 20:09:40.000149  755599 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 20:09:40.000850  755599 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 20:09:40.003796  755599 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 20:09:40.005618  755599 out.go:204]   - Booting up control plane ...
	I0729 20:09:40.005729  755599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 20:09:40.005821  755599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 20:09:40.006162  755599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 20:09:40.027464  755599 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 20:09:40.028255  755599 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 20:09:40.028317  755599 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 20:09:40.157807  755599 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 20:09:40.157940  755599 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 20:09:41.158624  755599 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001286373s
	I0729 20:09:41.158748  755599 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 20:09:46.897953  755599 kubeadm.go:310] [api-check] The API server is healthy after 5.742089048s
	I0729 20:09:46.910263  755599 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 20:09:46.955430  755599 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 20:09:46.982828  755599 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 20:09:46.983075  755599 kubeadm.go:310] [mark-control-plane] Marking the node ha-344518 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 20:09:46.995548  755599 kubeadm.go:310] [bootstrap-token] Using token: lcul30.lktilqyd6grpi0f8
	I0729 20:09:46.997450  755599 out.go:204]   - Configuring RBAC rules ...
	I0729 20:09:46.997610  755599 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 20:09:47.009334  755599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 20:09:47.018324  755599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 20:09:47.022284  755599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 20:09:47.026819  755599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 20:09:47.029990  755599 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 20:09:47.306455  755599 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 20:09:47.731708  755599 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 20:09:48.305722  755599 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 20:09:48.306832  755599 kubeadm.go:310] 
	I0729 20:09:48.306923  755599 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 20:09:48.306936  755599 kubeadm.go:310] 
	I0729 20:09:48.307036  755599 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 20:09:48.307049  755599 kubeadm.go:310] 
	I0729 20:09:48.307091  755599 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 20:09:48.307166  755599 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 20:09:48.307230  755599 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 20:09:48.307240  755599 kubeadm.go:310] 
	I0729 20:09:48.307341  755599 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 20:09:48.307362  755599 kubeadm.go:310] 
	I0729 20:09:48.307430  755599 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 20:09:48.307441  755599 kubeadm.go:310] 
	I0729 20:09:48.307519  755599 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 20:09:48.307628  755599 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 20:09:48.307745  755599 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 20:09:48.307765  755599 kubeadm.go:310] 
	I0729 20:09:48.307896  755599 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 20:09:48.308052  755599 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 20:09:48.308063  755599 kubeadm.go:310] 
	I0729 20:09:48.308190  755599 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lcul30.lktilqyd6grpi0f8 \
	I0729 20:09:48.308329  755599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6ca3a9d55ee61a543466ff10da1967c1b50ddc5ed0f369803448ea7dd15a35e4 \
	I0729 20:09:48.308364  755599 kubeadm.go:310] 	--control-plane 
	I0729 20:09:48.308371  755599 kubeadm.go:310] 
	I0729 20:09:48.308465  755599 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 20:09:48.308480  755599 kubeadm.go:310] 
	I0729 20:09:48.308584  755599 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lcul30.lktilqyd6grpi0f8 \
	I0729 20:09:48.308756  755599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6ca3a9d55ee61a543466ff10da1967c1b50ddc5ed0f369803448ea7dd15a35e4 
	I0729 20:09:48.308937  755599 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
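The join commands printed by kubeadm embed the bootstrap token and the CA public-key hash. If that hash ever needs to be recomputed for additional control-plane or worker nodes, the standard derivation from the cluster CA is the following (sketch; it assumes an RSA CA key, and minikube keeps the CA at /var/lib/minikube/certs/ca.crt rather than /etc/kubernetes/pki/ca.crt):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'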
	I0729 20:09:48.308953  755599 cni.go:84] Creating CNI manager for ""
	I0729 20:09:48.308965  755599 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0729 20:09:48.311540  755599 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 20:09:48.312840  755599 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 20:09:48.317892  755599 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 20:09:48.317910  755599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 20:09:48.335029  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
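With a single node detected, minikube selects kindnet as the CNI and applies its manifest with the bundled kubectl, as shown above. A quick post-apply check (sketch; the app=kindnet label is an assumption about minikube's kindnet manifest):

	sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l app=kindnet -o wide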
	I0729 20:09:48.651905  755599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 20:09:48.652057  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-344518 minikube.k8s.io/updated_at=2024_07_29T20_09_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a minikube.k8s.io/name=ha-344518 minikube.k8s.io/primary=true
	I0729 20:09:48.652059  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:48.671597  755599 ops.go:34] apiserver oom_adj: -16
	I0729 20:09:48.802213  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:49.302225  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:49.802558  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:50.302810  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:50.802819  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:51.302714  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:51.802603  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:52.302970  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:52.803281  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:53.302597  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:53.802801  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:54.302351  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:54.803097  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:55.302327  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:55.803096  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:56.302252  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:56.802499  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:57.303044  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:57.803175  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:58.302637  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:58.803287  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:59.303208  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:09:59.802965  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:10:00.303191  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:10:00.802305  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 20:10:00.876177  755599 kubeadm.go:1113] duration metric: took 12.224218004s to wait for elevateKubeSystemPrivileges
	I0729 20:10:00.876216  755599 kubeadm.go:394] duration metric: took 24.325047279s to StartCluster
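The burst of repeated "kubectl get sa default" calls above is a poll loop: the cluster-admin binding and the default ServiceAccount only become usable once the controller-manager has caught up, which here took about 12 seconds. An equivalent shell sketch of that wait (the 0.5s interval is an assumption):

	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done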
	I0729 20:10:00.876241  755599 settings.go:142] acquiring lock: {Name:mk9a2eb797f60b19768f4bfa250a8d2214a5ca12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:10:00.876354  755599 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:10:00.877048  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/kubeconfig: {Name:mk9e65e9af9b71b889324d8c5e2a1adfebbca588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:10:00.877284  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 20:10:00.877294  755599 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:10:00.877338  755599 start.go:241] waiting for startup goroutines ...
	I0729 20:10:00.877348  755599 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 20:10:00.877412  755599 addons.go:69] Setting storage-provisioner=true in profile "ha-344518"
	I0729 20:10:00.877426  755599 addons.go:69] Setting default-storageclass=true in profile "ha-344518"
	I0729 20:10:00.877448  755599 addons.go:234] Setting addon storage-provisioner=true in "ha-344518"
	I0729 20:10:00.877451  755599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-344518"
	I0729 20:10:00.877497  755599 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:10:00.877578  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:10:00.877922  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:00.877975  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:00.877922  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:00.878077  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:00.893520  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43289
	I0729 20:10:00.893530  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43087
	I0729 20:10:00.893998  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:00.894081  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:00.894569  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:00.894581  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:00.894593  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:00.894598  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:00.894956  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:00.894969  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:00.895228  755599 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:10:00.895495  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:00.895530  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:00.897514  755599 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:10:00.897876  755599 kapi.go:59] client config for ha-344518: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.crt", KeyFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key", CAFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 20:10:00.898435  755599 cert_rotation.go:137] Starting client certificate rotation controller
	I0729 20:10:00.898810  755599 addons.go:234] Setting addon default-storageclass=true in "ha-344518"
	I0729 20:10:00.898860  755599 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:10:00.899242  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:00.899284  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:00.911267  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43687
	I0729 20:10:00.911791  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:00.912332  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:00.912354  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:00.912736  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:00.912957  755599 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:10:00.913755  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42901
	I0729 20:10:00.914382  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:00.914921  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:00.914945  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:00.914984  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:10:00.915304  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:00.915786  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:00.915821  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:00.917371  755599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 20:10:00.918684  755599 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 20:10:00.918700  755599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 20:10:00.918715  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:10:00.921771  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:00.922271  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:10:00.922292  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:00.922546  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:10:00.922724  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:10:00.922883  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:10:00.923002  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:10:00.931534  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45715
	I0729 20:10:00.931981  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:00.932645  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:00.932670  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:00.933000  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:00.933185  755599 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:10:00.934837  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:10:00.935030  755599 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 20:10:00.935044  755599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 20:10:00.935058  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:10:00.937812  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:00.938208  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:10:00.938239  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:00.938366  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:10:00.938551  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:10:00.938709  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:10:00.938855  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:10:00.964873  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 20:10:01.029080  755599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 20:10:01.085656  755599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 20:10:01.389685  755599 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
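For reference, the sed pipeline logged above splices a hosts block into the CoreDNS Corefile, just ahead of its forward plugin, so that host.minikube.internal resolves to the host-side gateway. Reconstructed from that command, the inserted stanza is:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}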
	I0729 20:10:01.617106  755599 main.go:141] libmachine: Making call to close driver server
	I0729 20:10:01.617139  755599 main.go:141] libmachine: (ha-344518) Calling .Close
	I0729 20:10:01.617119  755599 main.go:141] libmachine: Making call to close driver server
	I0729 20:10:01.617203  755599 main.go:141] libmachine: (ha-344518) Calling .Close
	I0729 20:10:01.617466  755599 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:10:01.617486  755599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:10:01.617495  755599 main.go:141] libmachine: Making call to close driver server
	I0729 20:10:01.617503  755599 main.go:141] libmachine: (ha-344518) Calling .Close
	I0729 20:10:01.617502  755599 main.go:141] libmachine: (ha-344518) DBG | Closing plugin on server side
	I0729 20:10:01.617468  755599 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:10:01.617521  755599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:10:01.617530  755599 main.go:141] libmachine: Making call to close driver server
	I0729 20:10:01.617537  755599 main.go:141] libmachine: (ha-344518) Calling .Close
	I0729 20:10:01.617817  755599 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:10:01.617831  755599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:10:01.617832  755599 main.go:141] libmachine: (ha-344518) DBG | Closing plugin on server side
	I0729 20:10:01.617874  755599 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:10:01.617894  755599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:10:01.618040  755599 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0729 20:10:01.618048  755599 round_trippers.go:469] Request Headers:
	I0729 20:10:01.618058  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:10:01.618062  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:10:01.632115  755599 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0729 20:10:01.632861  755599 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0729 20:10:01.632878  755599 round_trippers.go:469] Request Headers:
	I0729 20:10:01.632888  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:10:01.632895  755599 round_trippers.go:473]     Content-Type: application/json
	I0729 20:10:01.632899  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:10:01.635477  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:10:01.635717  755599 main.go:141] libmachine: Making call to close driver server
	I0729 20:10:01.635741  755599 main.go:141] libmachine: (ha-344518) Calling .Close
	I0729 20:10:01.636016  755599 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:10:01.636045  755599 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:10:01.637761  755599 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0729 20:10:01.639042  755599 addons.go:510] duration metric: took 761.689784ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0729 20:10:01.639081  755599 start.go:246] waiting for cluster config update ...
	I0729 20:10:01.639105  755599 start.go:255] writing updated cluster config ...
	I0729 20:10:01.640969  755599 out.go:177] 
	I0729 20:10:01.641988  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:10:01.642051  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:10:01.643862  755599 out.go:177] * Starting "ha-344518-m02" control-plane node in "ha-344518" cluster
	I0729 20:10:01.645038  755599 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:10:01.645067  755599 cache.go:56] Caching tarball of preloaded images
	I0729 20:10:01.645164  755599 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 20:10:01.645177  755599 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 20:10:01.645244  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:10:01.645427  755599 start.go:360] acquireMachinesLock for ha-344518-m02: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:10:01.645476  755599 start.go:364] duration metric: took 27.961µs to acquireMachinesLock for "ha-344518-m02"
	I0729 20:10:01.645496  755599 start.go:93] Provisioning new machine with config: &{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:10:01.645575  755599 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0729 20:10:01.647191  755599 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 20:10:01.647291  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:01.647328  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:01.662983  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46835
	I0729 20:10:01.663480  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:01.664045  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:01.664072  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:01.664434  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:01.664664  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetMachineName
	I0729 20:10:01.664850  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:01.665032  755599 start.go:159] libmachine.API.Create for "ha-344518" (driver="kvm2")
	I0729 20:10:01.665101  755599 client.go:168] LocalClient.Create starting
	I0729 20:10:01.665140  755599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem
	I0729 20:10:01.665178  755599 main.go:141] libmachine: Decoding PEM data...
	I0729 20:10:01.665197  755599 main.go:141] libmachine: Parsing certificate...
	I0729 20:10:01.665270  755599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem
	I0729 20:10:01.665319  755599 main.go:141] libmachine: Decoding PEM data...
	I0729 20:10:01.665339  755599 main.go:141] libmachine: Parsing certificate...
	I0729 20:10:01.665367  755599 main.go:141] libmachine: Running pre-create checks...
	I0729 20:10:01.665377  755599 main.go:141] libmachine: (ha-344518-m02) Calling .PreCreateCheck
	I0729 20:10:01.665585  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetConfigRaw
	I0729 20:10:01.665951  755599 main.go:141] libmachine: Creating machine...
	I0729 20:10:01.665966  755599 main.go:141] libmachine: (ha-344518-m02) Calling .Create
	I0729 20:10:01.666103  755599 main.go:141] libmachine: (ha-344518-m02) Creating KVM machine...
	I0729 20:10:01.667399  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found existing default KVM network
	I0729 20:10:01.667524  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found existing private KVM network mk-ha-344518
	I0729 20:10:01.667685  755599 main.go:141] libmachine: (ha-344518-m02) Setting up store path in /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02 ...
	I0729 20:10:01.667705  755599 main.go:141] libmachine: (ha-344518-m02) Building disk image from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 20:10:01.667733  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:01.667657  756009 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:10:01.667887  755599 main.go:141] libmachine: (ha-344518-m02) Downloading /home/jenkins/minikube-integration/19344-733808/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 20:10:01.948848  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:01.948699  756009 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa...
	I0729 20:10:02.042832  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:02.042689  756009 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/ha-344518-m02.rawdisk...
	I0729 20:10:02.042863  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Writing magic tar header
	I0729 20:10:02.042878  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Writing SSH key tar header
	I0729 20:10:02.042960  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:02.042878  756009 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02 ...
	I0729 20:10:02.043030  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02
	I0729 20:10:02.043050  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines
	I0729 20:10:02.043063  755599 main.go:141] libmachine: (ha-344518-m02) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02 (perms=drwx------)
	I0729 20:10:02.043081  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:10:02.043093  755599 main.go:141] libmachine: (ha-344518-m02) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines (perms=drwxr-xr-x)
	I0729 20:10:02.043115  755599 main.go:141] libmachine: (ha-344518-m02) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube (perms=drwxr-xr-x)
	I0729 20:10:02.043128  755599 main.go:141] libmachine: (ha-344518-m02) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808 (perms=drwxrwxr-x)
	I0729 20:10:02.043143  755599 main.go:141] libmachine: (ha-344518-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 20:10:02.043157  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808
	I0729 20:10:02.043167  755599 main.go:141] libmachine: (ha-344518-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 20:10:02.043182  755599 main.go:141] libmachine: (ha-344518-m02) Creating domain...
	I0729 20:10:02.043199  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 20:10:02.043213  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Checking permissions on dir: /home/jenkins
	I0729 20:10:02.043223  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Checking permissions on dir: /home
	I0729 20:10:02.043234  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Skipping /home - not owner
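The "Writing magic tar header" / "Writing SSH key tar header" lines above correspond to seeding the raw disk image with the freshly generated SSH key before the VM first boots, so the guest can pick the key up on startup. A minimal Go sketch of that idea, assuming a boot2docker-style guest that looks for a tar archive at the start of its data disk (file names, layout and sizes here are illustrative, not the driver's actual code):

	// Package disksketch illustrates placing an SSH key inside a tar archive
	// written at the start of a sparse raw disk image.
	package disksketch

	import (
		"archive/tar"
		"bytes"
		"os"
	)

	// CreateRawDiskWithKey writes a small tar archive containing the public key
	// at offset 0 of dst, then extends the file to sizeBytes, leaving it sparse.
	func CreateRawDiskWithKey(dst, pubKeyPath string, sizeBytes int64) error {
		key, err := os.ReadFile(pubKeyPath)
		if err != nil {
			return err
		}

		var buf bytes.Buffer
		tw := tar.NewWriter(&buf)
		hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(key))}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if _, err := tw.Write(key); err != nil {
			return err
		}
		if err := tw.Close(); err != nil {
			return err
		}

		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()
		if _, err := f.Write(buf.Bytes()); err != nil {
			return err
		}
		return f.Truncate(sizeBytes)
	}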
	I0729 20:10:02.044411  755599 main.go:141] libmachine: (ha-344518-m02) define libvirt domain using xml: 
	I0729 20:10:02.044429  755599 main.go:141] libmachine: (ha-344518-m02) <domain type='kvm'>
	I0729 20:10:02.044439  755599 main.go:141] libmachine: (ha-344518-m02)   <name>ha-344518-m02</name>
	I0729 20:10:02.044446  755599 main.go:141] libmachine: (ha-344518-m02)   <memory unit='MiB'>2200</memory>
	I0729 20:10:02.044453  755599 main.go:141] libmachine: (ha-344518-m02)   <vcpu>2</vcpu>
	I0729 20:10:02.044459  755599 main.go:141] libmachine: (ha-344518-m02)   <features>
	I0729 20:10:02.044474  755599 main.go:141] libmachine: (ha-344518-m02)     <acpi/>
	I0729 20:10:02.044485  755599 main.go:141] libmachine: (ha-344518-m02)     <apic/>
	I0729 20:10:02.044495  755599 main.go:141] libmachine: (ha-344518-m02)     <pae/>
	I0729 20:10:02.044503  755599 main.go:141] libmachine: (ha-344518-m02)     
	I0729 20:10:02.044529  755599 main.go:141] libmachine: (ha-344518-m02)   </features>
	I0729 20:10:02.044551  755599 main.go:141] libmachine: (ha-344518-m02)   <cpu mode='host-passthrough'>
	I0729 20:10:02.044558  755599 main.go:141] libmachine: (ha-344518-m02)   
	I0729 20:10:02.044569  755599 main.go:141] libmachine: (ha-344518-m02)   </cpu>
	I0729 20:10:02.044575  755599 main.go:141] libmachine: (ha-344518-m02)   <os>
	I0729 20:10:02.044581  755599 main.go:141] libmachine: (ha-344518-m02)     <type>hvm</type>
	I0729 20:10:02.044586  755599 main.go:141] libmachine: (ha-344518-m02)     <boot dev='cdrom'/>
	I0729 20:10:02.044655  755599 main.go:141] libmachine: (ha-344518-m02)     <boot dev='hd'/>
	I0729 20:10:02.044661  755599 main.go:141] libmachine: (ha-344518-m02)     <bootmenu enable='no'/>
	I0729 20:10:02.044666  755599 main.go:141] libmachine: (ha-344518-m02)   </os>
	I0729 20:10:02.044671  755599 main.go:141] libmachine: (ha-344518-m02)   <devices>
	I0729 20:10:02.044681  755599 main.go:141] libmachine: (ha-344518-m02)     <disk type='file' device='cdrom'>
	I0729 20:10:02.044697  755599 main.go:141] libmachine: (ha-344518-m02)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/boot2docker.iso'/>
	I0729 20:10:02.044708  755599 main.go:141] libmachine: (ha-344518-m02)       <target dev='hdc' bus='scsi'/>
	I0729 20:10:02.044717  755599 main.go:141] libmachine: (ha-344518-m02)       <readonly/>
	I0729 20:10:02.044722  755599 main.go:141] libmachine: (ha-344518-m02)     </disk>
	I0729 20:10:02.044756  755599 main.go:141] libmachine: (ha-344518-m02)     <disk type='file' device='disk'>
	I0729 20:10:02.044790  755599 main.go:141] libmachine: (ha-344518-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 20:10:02.044806  755599 main.go:141] libmachine: (ha-344518-m02)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/ha-344518-m02.rawdisk'/>
	I0729 20:10:02.044816  755599 main.go:141] libmachine: (ha-344518-m02)       <target dev='hda' bus='virtio'/>
	I0729 20:10:02.044822  755599 main.go:141] libmachine: (ha-344518-m02)     </disk>
	I0729 20:10:02.044831  755599 main.go:141] libmachine: (ha-344518-m02)     <interface type='network'>
	I0729 20:10:02.044838  755599 main.go:141] libmachine: (ha-344518-m02)       <source network='mk-ha-344518'/>
	I0729 20:10:02.044843  755599 main.go:141] libmachine: (ha-344518-m02)       <model type='virtio'/>
	I0729 20:10:02.044852  755599 main.go:141] libmachine: (ha-344518-m02)     </interface>
	I0729 20:10:02.044863  755599 main.go:141] libmachine: (ha-344518-m02)     <interface type='network'>
	I0729 20:10:02.044898  755599 main.go:141] libmachine: (ha-344518-m02)       <source network='default'/>
	I0729 20:10:02.044917  755599 main.go:141] libmachine: (ha-344518-m02)       <model type='virtio'/>
	I0729 20:10:02.044931  755599 main.go:141] libmachine: (ha-344518-m02)     </interface>
	I0729 20:10:02.044947  755599 main.go:141] libmachine: (ha-344518-m02)     <serial type='pty'>
	I0729 20:10:02.044960  755599 main.go:141] libmachine: (ha-344518-m02)       <target port='0'/>
	I0729 20:10:02.044971  755599 main.go:141] libmachine: (ha-344518-m02)     </serial>
	I0729 20:10:02.044984  755599 main.go:141] libmachine: (ha-344518-m02)     <console type='pty'>
	I0729 20:10:02.044996  755599 main.go:141] libmachine: (ha-344518-m02)       <target type='serial' port='0'/>
	I0729 20:10:02.045008  755599 main.go:141] libmachine: (ha-344518-m02)     </console>
	I0729 20:10:02.045028  755599 main.go:141] libmachine: (ha-344518-m02)     <rng model='virtio'>
	I0729 20:10:02.045040  755599 main.go:141] libmachine: (ha-344518-m02)       <backend model='random'>/dev/random</backend>
	I0729 20:10:02.045051  755599 main.go:141] libmachine: (ha-344518-m02)     </rng>
	I0729 20:10:02.045061  755599 main.go:141] libmachine: (ha-344518-m02)     
	I0729 20:10:02.045075  755599 main.go:141] libmachine: (ha-344518-m02)     
	I0729 20:10:02.045091  755599 main.go:141] libmachine: (ha-344518-m02)   </devices>
	I0729 20:10:02.045102  755599 main.go:141] libmachine: (ha-344518-m02) </domain>
	I0729 20:10:02.045114  755599 main.go:141] libmachine: (ha-344518-m02) 
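The kvm2 driver generates the domain XML above from an internal template and defines the domain through the libvirt API. As a rough illustration of that flow only (not the driver's code), a trimmed-down template could be rendered and handed to virsh define, which registers the domain without starting it:

	// Package libvirtsketch renders a minimal libvirt domain definition and
	// registers it with virsh; the real driver talks to libvirt directly.
	package libvirtsketch

	import (
		"os"
		"os/exec"
		"text/template"
	)

	const domainTmpl = `<domain type='kvm'>
	  <name>{{.Name}}</name>
	  <memory unit='MiB'>{{.MemoryMiB}}</memory>
	  <vcpu>{{.CPUs}}</vcpu>
	</domain>
	`

	type DomainConfig struct {
		Name      string
		MemoryMiB int
		CPUs      int
	}

	func DefineDomain(cfg DomainConfig) error {
		f, err := os.CreateTemp("", "domain-*.xml")
		if err != nil {
			return err
		}
		defer os.Remove(f.Name())

		t := template.Must(template.New("domain").Parse(domainTmpl))
		if err := t.Execute(f, cfg); err != nil {
			return err
		}
		if err := f.Close(); err != nil {
			return err
		}

		// "virsh define" registers the domain with libvirt without starting it.
		return exec.Command("virsh", "define", f.Name()).Run()
	}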
	I0729 20:10:02.053275  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:0a:c1:60 in network default
	I0729 20:10:02.053938  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:02.053961  755599 main.go:141] libmachine: (ha-344518-m02) Ensuring networks are active...
	I0729 20:10:02.054689  755599 main.go:141] libmachine: (ha-344518-m02) Ensuring network default is active
	I0729 20:10:02.054979  755599 main.go:141] libmachine: (ha-344518-m02) Ensuring network mk-ha-344518 is active
	I0729 20:10:02.055318  755599 main.go:141] libmachine: (ha-344518-m02) Getting domain xml...
	I0729 20:10:02.056051  755599 main.go:141] libmachine: (ha-344518-m02) Creating domain...
	I0729 20:10:03.314645  755599 main.go:141] libmachine: (ha-344518-m02) Waiting to get IP...
	I0729 20:10:03.315559  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:03.316122  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:03.316150  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:03.316088  756009 retry.go:31] will retry after 216.191206ms: waiting for machine to come up
	I0729 20:10:03.533518  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:03.533951  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:03.533974  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:03.533889  756009 retry.go:31] will retry after 265.56964ms: waiting for machine to come up
	I0729 20:10:03.801430  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:03.801916  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:03.801953  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:03.801874  756009 retry.go:31] will retry after 377.103233ms: waiting for machine to come up
	I0729 20:10:04.180447  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:04.180994  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:04.181028  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:04.180923  756009 retry.go:31] will retry after 575.646899ms: waiting for machine to come up
	I0729 20:10:04.758309  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:04.758860  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:04.758893  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:04.758784  756009 retry.go:31] will retry after 493.74167ms: waiting for machine to come up
	I0729 20:10:05.254611  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:05.255019  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:05.255049  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:05.254958  756009 retry.go:31] will retry after 573.46082ms: waiting for machine to come up
	I0729 20:10:05.829842  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:05.830364  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:05.830393  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:05.830239  756009 retry.go:31] will retry after 958.136426ms: waiting for machine to come up
	I0729 20:10:06.790708  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:06.791203  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:06.791233  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:06.791140  756009 retry.go:31] will retry after 1.232792133s: waiting for machine to come up
	I0729 20:10:08.025788  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:08.026198  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:08.026221  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:08.026156  756009 retry.go:31] will retry after 1.770457566s: waiting for machine to come up
	I0729 20:10:09.797886  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:09.798308  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:09.798331  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:09.798245  756009 retry.go:31] will retry after 1.820441853s: waiting for machine to come up
	I0729 20:10:11.621110  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:11.621620  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:11.621650  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:11.621571  756009 retry.go:31] will retry after 1.80956907s: waiting for machine to come up
	I0729 20:10:13.433238  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:13.433725  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:13.433747  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:13.433687  756009 retry.go:31] will retry after 3.393381444s: waiting for machine to come up
	I0729 20:10:16.828308  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:16.828715  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find current IP address of domain ha-344518-m02 in network mk-ha-344518
	I0729 20:10:16.828745  755599 main.go:141] libmachine: (ha-344518-m02) DBG | I0729 20:10:16.828640  756009 retry.go:31] will retry after 4.18008266s: waiting for machine to come up
	I0729 20:10:21.014071  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:21.014671  755599 main.go:141] libmachine: (ha-344518-m02) Found IP for machine: 192.168.39.104
	I0729 20:10:21.014702  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has current primary IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:21.014712  755599 main.go:141] libmachine: (ha-344518-m02) Reserving static IP address...
	I0729 20:10:21.015170  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find host DHCP lease matching {name: "ha-344518-m02", mac: "52:54:00:24:a4:74", ip: "192.168.39.104"} in network mk-ha-344518
	I0729 20:10:21.094510  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Getting to WaitForSSH function...
	I0729 20:10:21.094544  755599 main.go:141] libmachine: (ha-344518-m02) Reserved static IP address: 192.168.39.104
	I0729 20:10:21.094557  755599 main.go:141] libmachine: (ha-344518-m02) Waiting for SSH to be available...
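The wait-for-IP phase above is a simple poll with a growing delay against the DHCP leases of the mk-ha-344518 network, keyed on the VM's MAC address. A hedged sketch of that pattern, shelling out to virsh net-dhcp-leases purely for illustration (the driver reads leases through the libvirt API, and the backoff numbers here are arbitrary):

	// Package ipwait polls a libvirt network's DHCP leases until a MAC address
	// appears, roughly mirroring the retry loop in the log above.
	package ipwait

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func WaitForIP(network, mac string, maxAttempts int) (string, error) {
		delay := 200 * time.Millisecond
		for i := 0; i < maxAttempts; i++ {
			out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
			if err == nil {
				for _, line := range strings.Split(string(out), "\n") {
					if !strings.Contains(line, mac) {
						continue
					}
					// The IP column is printed as address/prefix after the MAC.
					for _, field := range strings.Fields(line) {
						if strings.Contains(field, "/") && strings.Count(field, ".") == 3 {
							return strings.SplitN(field, "/", 2)[0], nil
						}
					}
				}
			}
			time.Sleep(delay)
			delay += delay / 2 // grow the delay between attempts
		}
		return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, network)
	}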
	I0729 20:10:21.097713  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:21.098116  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518
	I0729 20:10:21.098145  755599 main.go:141] libmachine: (ha-344518-m02) DBG | unable to find defined IP address of network mk-ha-344518 interface with MAC address 52:54:00:24:a4:74
	I0729 20:10:21.098311  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Using SSH client type: external
	I0729 20:10:21.098345  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa (-rw-------)
	I0729 20:10:21.098398  755599 main.go:141] libmachine: (ha-344518-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 20:10:21.098414  755599 main.go:141] libmachine: (ha-344518-m02) DBG | About to run SSH command:
	I0729 20:10:21.098428  755599 main.go:141] libmachine: (ha-344518-m02) DBG | exit 0
	I0729 20:10:21.102481  755599 main.go:141] libmachine: (ha-344518-m02) DBG | SSH cmd err, output: exit status 255: 
	I0729 20:10:21.102510  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0729 20:10:21.102520  755599 main.go:141] libmachine: (ha-344518-m02) DBG | command : exit 0
	I0729 20:10:21.102526  755599 main.go:141] libmachine: (ha-344518-m02) DBG | err     : exit status 255
	I0729 20:10:21.102533  755599 main.go:141] libmachine: (ha-344518-m02) DBG | output  : 
	I0729 20:10:24.104783  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Getting to WaitForSSH function...
	I0729 20:10:24.107452  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.109207  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.109238  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.109444  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Using SSH client type: external
	I0729 20:10:24.109486  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa (-rw-------)
	I0729 20:10:24.109531  755599 main.go:141] libmachine: (ha-344518-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.104 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 20:10:24.109544  755599 main.go:141] libmachine: (ha-344518-m02) DBG | About to run SSH command:
	I0729 20:10:24.109554  755599 main.go:141] libmachine: (ha-344518-m02) DBG | exit 0
	I0729 20:10:24.236129  755599 main.go:141] libmachine: (ha-344518-m02) DBG | SSH cmd err, output: <nil>: 
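Provisioning then blocks on an SSH liveness check: the external ssh invocation shown above is retried with "exit 0" until it succeeds (the 20:10:21 attempt ran before an IP was known, note the empty host in docker@, and failed with exit status 255; the 20:10:24 attempt against 192.168.39.104 succeeded). A minimal sketch of that loop, with the option list trimmed down from the logged command:

	// Package sshwait retries a no-op SSH command until the guest accepts
	// connections or a deadline passes.
	package sshwait

	import (
		"os/exec"
		"time"
	)

	func WaitForSSH(user, ip, keyPath string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				user+"@"+ip, "exit 0")
			err := cmd.Run()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return err
			}
			time.Sleep(3 * time.Second) // the log shows roughly 3s between attempts
		}
	}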
	I0729 20:10:24.236436  755599 main.go:141] libmachine: (ha-344518-m02) KVM machine creation complete!
	I0729 20:10:24.236803  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetConfigRaw
	I0729 20:10:24.237362  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:24.237553  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:24.237733  755599 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 20:10:24.237750  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetState
	I0729 20:10:24.239100  755599 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 20:10:24.239117  755599 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 20:10:24.239127  755599 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 20:10:24.239133  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:24.241257  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.241549  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.241575  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.241720  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:24.241890  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.242053  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.242162  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:24.242305  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:10:24.242571  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0729 20:10:24.242584  755599 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 20:10:24.347201  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:10:24.347227  755599 main.go:141] libmachine: Detecting the provisioner...
	I0729 20:10:24.347240  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:24.349886  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.350239  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.350272  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.350403  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:24.350641  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.350839  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.350978  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:24.351152  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:10:24.351344  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0729 20:10:24.351357  755599 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 20:10:24.456711  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 20:10:24.456786  755599 main.go:141] libmachine: found compatible host: buildroot
	I0729 20:10:24.456792  755599 main.go:141] libmachine: Provisioning with buildroot...
	I0729 20:10:24.456803  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetMachineName
	I0729 20:10:24.457088  755599 buildroot.go:166] provisioning hostname "ha-344518-m02"
	I0729 20:10:24.457126  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetMachineName
	I0729 20:10:24.457361  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:24.460181  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.460520  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.460548  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.460715  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:24.460895  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.461030  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.461168  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:24.461371  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:10:24.461529  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0729 20:10:24.461543  755599 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344518-m02 && echo "ha-344518-m02" | sudo tee /etc/hostname
	I0729 20:10:24.577536  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344518-m02
	
	I0729 20:10:24.577590  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:24.580462  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.580900  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.580938  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.581111  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:24.581325  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.581510  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.581664  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:24.581841  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:10:24.582052  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0729 20:10:24.582077  755599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344518-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344518-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344518-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 20:10:24.691991  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:10:24.692024  755599 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 20:10:24.692057  755599 buildroot.go:174] setting up certificates
	I0729 20:10:24.692073  755599 provision.go:84] configureAuth start
	I0729 20:10:24.692085  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetMachineName
	I0729 20:10:24.692410  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:10:24.695188  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.695571  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.695598  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.695709  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:24.698369  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.698656  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.698689  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.698891  755599 provision.go:143] copyHostCerts
	I0729 20:10:24.698936  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:10:24.698984  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem, removing ...
	I0729 20:10:24.698999  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:10:24.699086  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 20:10:24.699186  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:10:24.699214  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem, removing ...
	I0729 20:10:24.699226  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:10:24.699270  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 20:10:24.699347  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:10:24.699374  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem, removing ...
	I0729 20:10:24.699384  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:10:24.699422  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 20:10:24.699525  755599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.ha-344518-m02 san=[127.0.0.1 192.168.39.104 ha-344518-m02 localhost minikube]
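The server certificate generated above is signed by the minikube CA and carries the SANs listed in the log: 127.0.0.1, the node IP, the hostname, localhost and minikube. A hedged sketch of issuing such a certificate with the Go standard library (the key size and validity period are assumptions, not values taken from minikube):

	// Package certsketch issues a CA-signed server certificate with IP and DNS SANs.
	package certsketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// NewServerCert returns the DER-encoded certificate and its private key.
	func NewServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, nodeIP net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-344518-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), nodeIP},
			DNSNames:     dnsNames, // e.g. "ha-344518-m02", "localhost", "minikube"
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}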
	I0729 20:10:24.871405  755599 provision.go:177] copyRemoteCerts
	I0729 20:10:24.871465  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 20:10:24.871491  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:24.874120  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.874490  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:24.874518  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:24.874708  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:24.874892  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:24.875026  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:24.875127  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	I0729 20:10:24.957261  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 20:10:24.957348  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 20:10:24.979592  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 20:10:24.979666  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 20:10:25.003753  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 20:10:25.003829  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 20:10:25.026533  755599 provision.go:87] duration metric: took 334.440906ms to configureAuth
	I0729 20:10:25.026563  755599 buildroot.go:189] setting minikube options for container-runtime
	I0729 20:10:25.026768  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:10:25.026860  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:25.029681  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.030032  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.030062  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.030231  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:25.030442  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:25.030680  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:25.030845  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:25.031036  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:10:25.031231  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0729 20:10:25.031248  755599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 20:10:25.287841  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
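The %!s(MISSING) in the printf command above appears to be a logging artifact of an unfilled Go %s format verb; judging by the echoed output, the file written to /etc/sysconfig/crio.minikube effectively contains:

	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

and CRI-O is then restarted so the service CIDR (10.96.0.0/12, per the cluster config above) is treated as an insecure registry range.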
	
	I0729 20:10:25.287881  755599 main.go:141] libmachine: Checking connection to Docker...
	I0729 20:10:25.287892  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetURL
	I0729 20:10:25.289359  755599 main.go:141] libmachine: (ha-344518-m02) DBG | Using libvirt version 6000000
	I0729 20:10:25.291673  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.291986  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.292006  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.292206  755599 main.go:141] libmachine: Docker is up and running!
	I0729 20:10:25.292228  755599 main.go:141] libmachine: Reticulating splines...
	I0729 20:10:25.292238  755599 client.go:171] duration metric: took 23.627123397s to LocalClient.Create
	I0729 20:10:25.292268  755599 start.go:167] duration metric: took 23.627239186s to libmachine.API.Create "ha-344518"
	I0729 20:10:25.292280  755599 start.go:293] postStartSetup for "ha-344518-m02" (driver="kvm2")
	I0729 20:10:25.292298  755599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 20:10:25.292321  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:25.292615  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 20:10:25.292640  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:25.294790  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.295171  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.295196  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.295456  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:25.295660  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:25.295881  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:25.296078  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	I0729 20:10:25.377381  755599 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 20:10:25.381145  755599 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 20:10:25.381171  755599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 20:10:25.381232  755599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 20:10:25.381303  755599 filesync.go:149] local asset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> 7409622.pem in /etc/ssl/certs
	I0729 20:10:25.381317  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /etc/ssl/certs/7409622.pem
	I0729 20:10:25.381396  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 20:10:25.389692  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:10:25.410728  755599 start.go:296] duration metric: took 118.430621ms for postStartSetup
	I0729 20:10:25.410777  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetConfigRaw
	I0729 20:10:25.411419  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:10:25.414097  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.414403  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.414427  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.414640  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:10:25.414835  755599 start.go:128] duration metric: took 23.769249347s to createHost
	I0729 20:10:25.414860  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:25.417227  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.417587  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.417614  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.417752  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:25.417947  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:25.418109  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:25.418226  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:25.418399  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:10:25.418563  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.104 22 <nil> <nil>}
	I0729 20:10:25.418573  755599 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 20:10:25.524151  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722283825.502045174
	
	I0729 20:10:25.524175  755599 fix.go:216] guest clock: 1722283825.502045174
	I0729 20:10:25.524182  755599 fix.go:229] Guest: 2024-07-29 20:10:25.502045174 +0000 UTC Remote: 2024-07-29 20:10:25.41484648 +0000 UTC m=+79.220118978 (delta=87.198694ms)
	I0729 20:10:25.524200  755599 fix.go:200] guest clock delta is within tolerance: 87.198694ms
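	The lines above verify that the new node's guest clock is close to the host clock by running `date +%s.%N` over SSH and comparing the result with local time. A minimal Go sketch of that comparison, assuming the guest output has already been captured; the parsing helper and the 2-second tolerance are illustrative, not minikube's exact values:

```go
// clockskew.go - sketch of the guest-clock check seen above. It parses the
// "<unix-seconds>.<nanoseconds>" output of `date +%s.%N` and compares it
// against the local clock, flagging drift beyond an assumed tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "1722283825.502045174" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad/truncate the fractional part to exactly 9 digits (nanoseconds).
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722283825.502045174")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for the sketch
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
```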
	I0729 20:10:25.524205  755599 start.go:83] releasing machines lock for "ha-344518-m02", held for 23.878719016s
	I0729 20:10:25.524222  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:25.524541  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:10:25.527237  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.527733  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.527764  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.530408  755599 out.go:177] * Found network options:
	I0729 20:10:25.531705  755599 out.go:177]   - NO_PROXY=192.168.39.238
	W0729 20:10:25.533019  755599 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 20:10:25.533051  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:25.533605  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:25.533811  755599 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:10:25.533872  755599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 20:10:25.533923  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	W0729 20:10:25.534007  755599 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 20:10:25.534071  755599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 20:10:25.534087  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:10:25.536706  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.536859  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.537171  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.537210  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.537244  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:25.537267  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:25.537292  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:25.537458  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:10:25.537530  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:25.537676  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:25.537686  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:10:25.537850  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:10:25.537853  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	I0729 20:10:25.538014  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	I0729 20:10:25.766802  755599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 20:10:25.773216  755599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 20:10:25.773298  755599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 20:10:25.788075  755599 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 20:10:25.788098  755599 start.go:495] detecting cgroup driver to use...
	I0729 20:10:25.788173  755599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 20:10:25.803257  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 20:10:25.815595  755599 docker.go:216] disabling cri-docker service (if available) ...
	I0729 20:10:25.815656  755599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 20:10:25.827786  755599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 20:10:25.839741  755599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 20:10:25.947907  755599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 20:10:26.097011  755599 docker.go:232] disabling docker service ...
	I0729 20:10:26.097103  755599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 20:10:26.112088  755599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 20:10:26.123704  755599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 20:10:26.259181  755599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 20:10:26.384791  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 20:10:26.398652  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 20:10:26.415590  755599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 20:10:26.415736  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:10:26.425383  755599 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 20:10:26.425459  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:10:26.435502  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:10:26.445453  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:10:26.455330  755599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 20:10:26.465058  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:10:26.474443  755599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:10:26.490588  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
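	The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed: it pins the pause image, switches the cgroup manager to cgroupfs, and adjusts conmon_cgroup and the unprivileged-port sysctl. A rough Go equivalent of the same key-rewrite idea, using regexp instead of sed; the helper is illustrative and only covers two of the keys touched in the log:

```go
// criocfg.go - illustrative sketch of the in-place CRI-O drop-in edits
// performed via sed above (pause image and cgroup manager). The file path
// and values come from the log; the rewrite-by-regexp approach is an
// assumption for the sketch.
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey replaces any existing "<key> = ..." line, or appends one.
func setKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + key + ` = .*$`)
	line := fmt.Sprintf(`%s = "%s"`, key, value)
	if re.Match(conf) {
		return re.ReplaceAll(conf, []byte(line))
	}
	return append(conf, []byte("\n"+line+"\n")...)
}

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
}
```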
	I0729 20:10:26.500191  755599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 20:10:26.509079  755599 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 20:10:26.509129  755599 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 20:10:26.521633  755599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
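	When the net.bridge.bridge-nf-call-iptables sysctl cannot be read (status 255 above), the runner falls back to loading the br_netfilter module, then enables IPv4 forwarding. A small sketch of that fallback, assuming local execution rather than the SSH runner used in the log:

```go
// netfilter.go - sketch of the bridge-netfilter fallback: probe the sysctl,
// load br_netfilter if it is missing, then enable IPv4 forwarding.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The probe fails when the br_netfilter module is not loaded yet.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("netfilter sysctl missing, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe failed:", err)
			os.Exit(1)
		}
	}
	// Equivalent to: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
		os.Exit(1)
	}
}
```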
	I0729 20:10:26.530264  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:10:26.643004  755599 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 20:10:26.771247  755599 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 20:10:26.771338  755599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 20:10:26.775753  755599 start.go:563] Will wait 60s for crictl version
	I0729 20:10:26.775817  755599 ssh_runner.go:195] Run: which crictl
	I0729 20:10:26.779060  755599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 20:10:26.817831  755599 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 20:10:26.817925  755599 ssh_runner.go:195] Run: crio --version
	I0729 20:10:26.844818  755599 ssh_runner.go:195] Run: crio --version
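	The runtime is then verified through `crictl version` and `crio --version`. A sketch of pulling RuntimeName/RuntimeVersion out of the plain-text crictl output by key prefix; the parsing strategy is an assumption for illustration, minikube's own handling lives in its cruntime package:

```go
// runtimecheck.go - run `crictl version` and extract the runtime fields
// shown in the log (RuntimeName, RuntimeVersion, RuntimeApiVersion).
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "/usr/bin/crictl", "version").Output()
	if err != nil {
		panic(err)
	}
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		// Lines look like "RuntimeVersion:  1.29.1".
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	fmt.Printf("runtime %s %s (API %s)\n",
		fields["RuntimeName"], fields["RuntimeVersion"], fields["RuntimeApiVersion"])
}
```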
	I0729 20:10:26.872041  755599 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 20:10:26.873343  755599 out.go:177]   - env NO_PROXY=192.168.39.238
	I0729 20:10:26.874356  755599 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:10:26.877071  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:26.877476  755599 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:10:15 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:10:26.877507  755599 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:10:26.877722  755599 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 20:10:26.881724  755599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
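	host.minikube.internal is pinned in /etc/hosts by filtering out any stale entry and appending a fresh mapping. A Go sketch of the same idempotent rewrite, assuming direct write access instead of the `sudo cp /tmp/h.$$` shell pipeline above:

```go
// hosts.go - drop any existing host.minikube.internal line from /etc/hosts,
// then append the mapping used in the log.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.39.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Remove a stale mapping before re-adding the current one.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
```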
	I0729 20:10:26.893411  755599 mustload.go:65] Loading cluster: ha-344518
	I0729 20:10:26.893636  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:10:26.893884  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:26.893911  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:26.908995  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43727
	I0729 20:10:26.909477  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:26.909979  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:26.909999  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:26.910377  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:26.910605  755599 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:10:26.912275  755599 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:10:26.912551  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:26.912586  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:26.927672  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38795
	I0729 20:10:26.928131  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:26.928640  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:26.928674  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:26.928989  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:26.929203  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:10:26.929375  755599 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518 for IP: 192.168.39.104
	I0729 20:10:26.929393  755599 certs.go:194] generating shared ca certs ...
	I0729 20:10:26.929414  755599 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:10:26.929568  755599 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 20:10:26.929624  755599 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 20:10:26.929638  755599 certs.go:256] generating profile certs ...
	I0729 20:10:26.929723  755599 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key
	I0729 20:10:26.929755  755599 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.174f3d4c
	I0729 20:10:26.929777  755599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.174f3d4c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.104 192.168.39.254]
	I0729 20:10:27.084609  755599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.174f3d4c ...
	I0729 20:10:27.084645  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.174f3d4c: {Name:mk29d4e2061830b1c1b84d575042ae4e1f4241e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:10:27.084855  755599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.174f3d4c ...
	I0729 20:10:27.084881  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.174f3d4c: {Name:mkc6e883c708deef6aeae601dff0685e5bf5a37e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:10:27.084986  755599 certs.go:381] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.174f3d4c -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt
	I0729 20:10:27.085110  755599 certs.go:385] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.174f3d4c -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key
	I0729 20:10:27.085235  755599 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key
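	The apiserver serving certificate generated above is signed by the cluster CA and carries the service IP, both node IPs and the HA VIP as SANs. A self-contained sketch of minting such a certificate with crypto/x509; it creates a throwaway CA instead of loading .minikube/ca.{crt,key}, and the 3-year validity is an assumption:

```go
// apiservercert.go - sketch of an apiserver cert with the IP SANs listed in
// the log, signed by a (throwaway) CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA so the sketch is runnable on its own.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SANs as listed in the log line above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.238"), net.ParseIP("192.168.39.104"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```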
	I0729 20:10:27.085252  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 20:10:27.085265  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 20:10:27.085275  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 20:10:27.085284  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 20:10:27.085293  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 20:10:27.085303  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 20:10:27.085317  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 20:10:27.085329  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 20:10:27.085380  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem (1338 bytes)
	W0729 20:10:27.085408  755599 certs.go:480] ignoring /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962_empty.pem, impossibly tiny 0 bytes
	I0729 20:10:27.085418  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 20:10:27.085437  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 20:10:27.085461  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 20:10:27.085482  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 20:10:27.085519  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:10:27.085550  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:10:27.085564  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem -> /usr/share/ca-certificates/740962.pem
	I0729 20:10:27.085574  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /usr/share/ca-certificates/7409622.pem
	I0729 20:10:27.085607  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:10:27.088743  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:27.089194  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:10:27.089221  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:27.089373  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:10:27.089637  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:10:27.089875  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:10:27.090016  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:10:27.160383  755599 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0729 20:10:27.165482  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 20:10:27.177116  755599 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0729 20:10:27.180867  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 20:10:27.190955  755599 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 20:10:27.195465  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 20:10:27.207980  755599 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0729 20:10:27.212601  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 20:10:27.223762  755599 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0729 20:10:27.227649  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 20:10:27.239175  755599 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0729 20:10:27.243062  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 20:10:27.254138  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 20:10:27.277250  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 20:10:27.298527  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 20:10:27.320138  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 20:10:27.341900  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 20:10:27.363590  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 20:10:27.384311  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 20:10:27.406025  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 20:10:27.427706  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 20:10:27.449641  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem --> /usr/share/ca-certificates/740962.pem (1338 bytes)
	I0729 20:10:27.470422  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /usr/share/ca-certificates/7409622.pem (1708 bytes)
	I0729 20:10:27.491984  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 20:10:27.507715  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 20:10:27.522149  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 20:10:27.536851  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 20:10:27.551846  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 20:10:27.566320  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 20:10:27.581928  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 20:10:27.596720  755599 ssh_runner.go:195] Run: openssl version
	I0729 20:10:27.601908  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 20:10:27.611390  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:10:27.615117  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:10:27.615172  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:10:27.620397  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 20:10:27.629882  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/740962.pem && ln -fs /usr/share/ca-certificates/740962.pem /etc/ssl/certs/740962.pem"
	I0729 20:10:27.639528  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/740962.pem
	I0729 20:10:27.643992  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 20:10:27.644044  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/740962.pem
	I0729 20:10:27.649601  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/740962.pem /etc/ssl/certs/51391683.0"
	I0729 20:10:27.659697  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7409622.pem && ln -fs /usr/share/ca-certificates/7409622.pem /etc/ssl/certs/7409622.pem"
	I0729 20:10:27.670737  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7409622.pem
	I0729 20:10:27.674609  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 20:10:27.674661  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7409622.pem
	I0729 20:10:27.679622  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7409622.pem /etc/ssl/certs/3ec20f2e.0"
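	Each trusted CA is also exposed under its OpenSSL subject-hash name in /etc/ssl/certs (e.g. b5213941.0), which is what the `openssl x509 -hash` plus `ln -fs` pairs above accomplish. A sketch of that hashing-and-linking step, with error handling simplified:

```go
// certlink.go - compute the OpenSSL subject hash of a PEM and symlink it as
// <hash>.0 under /etc/ssl/certs, mirroring the commands in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}
```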
	I0729 20:10:27.689228  755599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:10:27.692969  755599 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 20:10:27.693030  755599 kubeadm.go:934] updating node {m02 192.168.39.104 8443 v1.30.3 crio true true} ...
	I0729 20:10:27.693142  755599 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344518-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.104
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
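	The kubelet systemd drop-in shown above overrides ExecStart with node-specific flags (hostname override and node IP for m02). A sketch of composing and writing that drop-in to the path used later in the log; writing locally instead of through the SSH runner is an assumption of the sketch:

```go
// kubeletunit.go - compose the [Service] override for kubelet with the flags
// seen in the log and write it to the kubeadm drop-in path.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--hostname-override=ha-344518-m02",
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=192.168.39.104",
	}
	unit := fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet %s

[Install]
`, strings.Join(flags, " "))
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unit), 0o644); err != nil {
		panic(err)
	}
}
```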
	I0729 20:10:27.693185  755599 kube-vip.go:115] generating kube-vip config ...
	I0729 20:10:27.693228  755599 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 20:10:27.709929  755599 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 20:10:27.710066  755599 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
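	The kube-vip static pod manifest above is generated from a handful of parameters (VIP, port, image). An abbreviated text/template sketch of that rendering; only a few of the env vars are reproduced here, and the full manifest in the log remains the authoritative shape:

```go
// kubevip.go - render a cut-down kube-vip static pod manifest from the
// parameters seen in the log (image, apiserver port, HA VIP).
package main

import (
	"os"
	"text/template"
)

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	params := struct {
		Image string
		Port  int
		VIP   string
	}{"ghcr.io/kube-vip/kube-vip:v0.8.0", 8443, "192.168.39.254"}
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}
```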
	I0729 20:10:27.710136  755599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 20:10:27.719623  755599 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 20:10:27.719672  755599 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 20:10:27.728587  755599 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 20:10:27.728612  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 20:10:27.728725  755599 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 20:10:27.728731  755599 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubelet
	I0729 20:10:27.728748  755599 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubeadm
	I0729 20:10:27.732694  755599 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 20:10:27.732720  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 20:10:51.375168  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:10:51.390438  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 20:10:51.390532  755599 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 20:10:51.394517  755599 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 20:10:51.394562  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0729 20:10:55.678731  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 20:10:55.678817  755599 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 20:10:55.683573  755599 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 20:10:55.683612  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
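	Binary transfer follows a stat-then-copy pattern: each of kubectl, kubelet and kubeadm is checked on the target first and copied from the local cache only when missing. A sketch of the same idea using the ssh/scp CLIs; minikube's own ssh_runner does this over an established session, so the shelling-out here (and ignoring the need for root on the target path) is purely illustrative:

```go
// xferbin.go - "stat first, copy only if missing" for the k8s binaries,
// using the host/key/paths that appear in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const (
		key  = "/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa"
		host = "docker@192.168.39.104"
	)
	for _, bin := range []string{"kubectl", "kubelet", "kubeadm"} {
		remote := "/var/lib/minikube/binaries/v1.30.3/" + bin
		// Existence check: exits non-zero when the binary is absent.
		if err := exec.Command("ssh", "-i", key, host, "stat", remote).Run(); err == nil {
			continue // already present, skip the copy
		}
		local := "/home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/" + bin
		if out, err := exec.Command("scp", "-i", key, local, host+":"+remote).CombinedOutput(); err != nil {
			fmt.Printf("copy %s failed: %v\n%s", bin, err, out)
		}
	}
}
```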
	I0729 20:10:55.894374  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 20:10:55.903261  755599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0729 20:10:55.918748  755599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 20:10:55.933816  755599 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 20:10:55.949545  755599 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 20:10:55.953144  755599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 20:10:55.964404  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:10:56.104009  755599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:10:56.119957  755599 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:10:56.120359  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:10:56.120412  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:10:56.136417  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35451
	I0729 20:10:56.137139  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:10:56.137667  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:10:56.137697  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:10:56.138069  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:10:56.138287  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:10:56.138491  755599 start.go:317] joinCluster: &{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:10:56.138598  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 20:10:56.138616  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:10:56.141591  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:56.142018  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:10:56.142052  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:10:56.142160  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:10:56.142327  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:10:56.142468  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:10:56.142598  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:10:56.290973  755599 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:10:56.291035  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e7gstn.n8706rcnpqrltanw --discovery-token-ca-cert-hash sha256:6ca3a9d55ee61a543466ff10da1967c1b50ddc5ed0f369803448ea7dd15a35e4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344518-m02 --control-plane --apiserver-advertise-address=192.168.39.104 --apiserver-bind-port=8443"
	I0729 20:11:17.128944  755599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token e7gstn.n8706rcnpqrltanw --discovery-token-ca-cert-hash sha256:6ca3a9d55ee61a543466ff10da1967c1b50ddc5ed0f369803448ea7dd15a35e4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344518-m02 --control-plane --apiserver-advertise-address=192.168.39.104 --apiserver-bind-port=8443": (20.837864117s)
	I0729 20:11:17.128995  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0729 20:11:17.551849  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-344518-m02 minikube.k8s.io/updated_at=2024_07_29T20_11_17_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a minikube.k8s.io/name=ha-344518 minikube.k8s.io/primary=false
	I0729 20:11:17.679581  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-344518-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 20:11:17.805324  755599 start.go:319] duration metric: took 21.666815728s to joinCluster
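	Joining m02 as an additional control plane boils down to a single kubeadm join invocation carrying the bootstrap token, CA-cert hash, CRI socket and advertise address seen above. A sketch of assembling that command line; the token and hash are placeholders:

```go
// joincmd.go - build the control-plane join command whose flags mirror the
// invocation in the log.
package main

import (
	"fmt"
	"strings"
)

func joinCommand(token, caHash, nodeName, advertiseIP string) string {
	args := []string{
		"kubeadm", "join", "control-plane.minikube.internal:8443",
		"--token", token,
		"--discovery-token-ca-cert-hash", "sha256:" + caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		"--apiserver-bind-port=8443",
	}
	return strings.Join(args, " ")
}

func main() {
	fmt.Println(joinCommand("<token>", "<ca-cert-hash>", "ha-344518-m02", "192.168.39.104"))
}
```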
	I0729 20:11:17.805405  755599 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:11:17.805786  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:11:17.806806  755599 out.go:177] * Verifying Kubernetes components...
	I0729 20:11:17.808089  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:11:18.095735  755599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:11:18.134901  755599 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:11:18.135217  755599 kapi.go:59] client config for ha-344518: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.crt", KeyFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key", CAFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 20:11:18.135285  755599 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.238:8443
	I0729 20:11:18.135526  755599 node_ready.go:35] waiting up to 6m0s for node "ha-344518-m02" to be "Ready" ...
	I0729 20:11:18.135670  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:18.135679  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:18.135687  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:18.135691  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:18.147378  755599 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0729 20:11:18.636249  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:18.636273  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:18.636285  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:18.636291  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:18.646266  755599 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0729 20:11:19.136128  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:19.136150  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:19.136159  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:19.136164  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:19.147756  755599 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0729 20:11:19.635948  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:19.635972  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:19.635981  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:19.635984  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:19.640942  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:20.136731  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:20.136761  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:20.136772  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:20.136777  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:20.140265  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:20.140809  755599 node_ready.go:53] node "ha-344518-m02" has status "Ready":"False"
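	The Ready wait polls GET /api/v1/nodes/ha-344518-m02 roughly every 500ms until the node reports the Ready condition. A standard-library sketch of the same poll, authenticating with the client-certificate paths from the log and decoding only status.conditions (a simplification; minikube uses client-go round trippers for this):

```go
// nodeready.go - poll the apiserver until the node's Ready condition is True
// or a 6-minute deadline passes, mirroring the wait loop in the log.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/json"
	"fmt"
	"net/http"
	"os"
	"time"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	base := "/home/jenkins/minikube-integration/19344-733808/.minikube"
	cert, err := tls.LoadX509KeyPair(base+"/profiles/ha-344518/client.crt", base+"/profiles/ha-344518/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
		Certificates: []tls.Certificate{cert}, RootCAs: pool,
	}}}

	url := "https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02"
	for deadline := time.Now().Add(6 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		resp, err := client.Get(url)
		if err != nil {
			continue // apiserver not reachable yet
		}
		var n node
		err = json.NewDecoder(resp.Body).Decode(&n)
		resp.Body.Close()
		if err != nil {
			continue
		}
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				fmt.Println("node is Ready")
				return
			}
		}
	}
	fmt.Println("timed out waiting for Ready")
}
```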
	I0729 20:11:20.636148  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:20.636180  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:20.636193  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:20.636206  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:20.639303  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:21.136214  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:21.136240  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:21.136251  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:21.136256  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:21.140463  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:21.636460  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:21.636487  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:21.636498  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:21.636507  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:21.641290  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:22.136131  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:22.136161  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:22.136173  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:22.136177  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:22.140134  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:22.141047  755599 node_ready.go:53] node "ha-344518-m02" has status "Ready":"False"
	I0729 20:11:22.636528  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:22.636555  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:22.636564  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:22.636569  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:22.639898  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:23.135732  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:23.135756  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:23.135765  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:23.135768  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:23.140090  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:23.636392  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:23.636415  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:23.636424  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:23.636429  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:23.640238  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:24.136187  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:24.136217  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:24.136230  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:24.136236  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:24.140483  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:24.141765  755599 node_ready.go:53] node "ha-344518-m02" has status "Ready":"False"
	I0729 20:11:24.636096  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:24.636123  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:24.636139  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:24.636143  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:24.639749  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:25.136794  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:25.136821  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:25.136835  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:25.136840  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:25.139992  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:25.636085  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:25.636114  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:25.636124  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:25.636129  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:25.639968  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:26.136183  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:26.136205  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:26.136214  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:26.136219  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:26.140418  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:26.636014  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:26.636059  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:26.636072  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:26.636077  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:26.638981  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:11:26.639565  755599 node_ready.go:53] node "ha-344518-m02" has status "Ready":"False"
	I0729 20:11:27.136721  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:27.136746  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:27.136755  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:27.136758  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:27.139799  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:27.636688  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:27.636713  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:27.636724  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:27.636729  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:27.640442  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:28.136515  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:28.136539  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:28.136549  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:28.136554  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:28.139904  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:28.635870  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:28.635896  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:28.635911  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:28.635916  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:28.639967  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:28.640906  755599 node_ready.go:53] node "ha-344518-m02" has status "Ready":"False"
	I0729 20:11:29.136398  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:29.136425  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:29.136438  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:29.136445  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:29.139879  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:29.635797  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:29.635823  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:29.635832  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:29.635835  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:29.639077  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:30.135917  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:30.135940  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:30.135949  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:30.135954  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:30.139150  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:30.636125  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:30.636149  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:30.636157  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:30.636167  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:30.640183  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:31.136355  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:31.136383  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:31.136393  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:31.136398  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:31.139422  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:31.140057  755599 node_ready.go:53] node "ha-344518-m02" has status "Ready":"False"
	I0729 20:11:31.636221  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:31.636248  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:31.636259  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:31.636264  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:31.639513  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:32.136239  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:32.136269  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:32.136282  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:32.136287  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:32.139706  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:32.636649  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:32.636675  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:32.636684  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:32.636688  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:32.639427  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:11:33.135916  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:33.135952  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:33.135974  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:33.135981  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:33.139013  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:33.635978  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:33.636004  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:33.636012  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:33.636015  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:33.639302  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:33.639813  755599 node_ready.go:53] node "ha-344518-m02" has status "Ready":"False"
	I0729 20:11:34.136146  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:34.136170  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:34.136178  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:34.136181  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:34.139054  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:11:34.635839  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:34.635862  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:34.635872  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:34.635876  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:34.638657  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:11:35.136639  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:35.136661  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.136670  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.136675  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.139916  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:35.636783  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:35.636809  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.636817  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.636822  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.641351  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:35.641979  755599 node_ready.go:49] node "ha-344518-m02" has status "Ready":"True"
	I0729 20:11:35.642004  755599 node_ready.go:38] duration metric: took 17.506442147s for node "ha-344518-m02" to be "Ready" ...
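[editor's note] The loop above is minikube's node_ready wait: it GETs the node object every 500ms until the Ready condition reports True. A minimal sketch of the same pattern with client-go (assuming an already-built *kubernetes.Clientset and a recent apimachinery that provides PollUntilContextTimeout; names here are illustrative, not minikube's code):

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node until its Ready condition is True,
// mirroring the GET-every-500ms loop in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API hiccups as "not yet", keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
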
	I0729 20:11:35.642021  755599 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 20:11:35.642130  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:11:35.642142  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.642152  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.642159  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.648575  755599 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 20:11:35.654259  755599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wzmc5" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.654343  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wzmc5
	I0729 20:11:35.654352  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.654359  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.654363  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.657016  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:11:35.657591  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:35.657608  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.657617  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.657623  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.663743  755599 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 20:11:35.664176  755599 pod_ready.go:92] pod "coredns-7db6d8ff4d-wzmc5" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:35.664194  755599 pod_ready.go:81] duration metric: took 9.912821ms for pod "coredns-7db6d8ff4d-wzmc5" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.664203  755599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xpkp6" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.664254  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xpkp6
	I0729 20:11:35.664261  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.664268  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.664276  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.666649  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:11:35.667297  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:35.667317  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.667324  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.667328  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.674714  755599 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 20:11:35.675140  755599 pod_ready.go:92] pod "coredns-7db6d8ff4d-xpkp6" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:35.675166  755599 pod_ready.go:81] duration metric: took 10.95765ms for pod "coredns-7db6d8ff4d-xpkp6" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.675175  755599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.675222  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344518
	I0729 20:11:35.675229  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.675235  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.675241  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.681307  755599 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 20:11:35.681915  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:35.681928  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.681936  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.681940  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.689894  755599 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0729 20:11:35.690323  755599 pod_ready.go:92] pod "etcd-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:35.690343  755599 pod_ready.go:81] duration metric: took 15.162322ms for pod "etcd-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.690353  755599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.690412  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344518-m02
	I0729 20:11:35.690424  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.690432  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.690436  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.695233  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:35.695795  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:35.695808  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.695815  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.695819  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.700061  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:35.700537  755599 pod_ready.go:92] pod "etcd-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:35.700553  755599 pod_ready.go:81] duration metric: took 10.194192ms for pod "etcd-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.700572  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:35.836896  755599 request.go:629] Waited for 136.251612ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518
	I0729 20:11:35.836997  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518
	I0729 20:11:35.837004  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:35.837014  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:35.837021  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:35.840842  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:36.036850  755599 request.go:629] Waited for 195.30679ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:36.036925  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:36.036931  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:36.036939  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:36.036943  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:36.040840  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:36.041535  755599 pod_ready.go:92] pod "kube-apiserver-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:36.041555  755599 pod_ready.go:81] duration metric: took 340.975746ms for pod "kube-apiserver-ha-344518" in "kube-system" namespace to be "Ready" ...
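[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's local rate limiter (by default roughly 5 requests/second with a burst of 10), not from the API server. A hedged sketch of raising those limits when you construct the client yourself (the QPS/Burst values are illustrative, not minikube's settings):

package clientcfg

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset from a kubeconfig and raises the
// client-side rate limits so bursts of GETs are not queued locally.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go default is 5 requests/second
	cfg.Burst = 100 // client-go default burst is 10
	return kubernetes.NewForConfig(cfg)
}
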
	I0729 20:11:36.041564  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:36.237543  755599 request.go:629] Waited for 195.904869ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518-m02
	I0729 20:11:36.237615  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518-m02
	I0729 20:11:36.237620  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:36.237628  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:36.237631  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:36.242184  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:36.437369  755599 request.go:629] Waited for 194.358026ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:36.437444  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:36.437453  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:36.437465  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:36.437474  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:36.440851  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:36.441387  755599 pod_ready.go:92] pod "kube-apiserver-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:36.441409  755599 pod_ready.go:81] duration metric: took 399.837907ms for pod "kube-apiserver-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:36.441419  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:36.637439  755599 request.go:629] Waited for 195.923012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518
	I0729 20:11:36.637526  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518
	I0729 20:11:36.637533  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:36.637541  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:36.637546  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:36.641074  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:36.837218  755599 request.go:629] Waited for 195.381667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:36.837280  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:36.837285  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:36.837292  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:36.837297  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:36.840676  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:36.841190  755599 pod_ready.go:92] pod "kube-controller-manager-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:36.841209  755599 pod_ready.go:81] duration metric: took 399.783358ms for pod "kube-controller-manager-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:36.841218  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:37.037339  755599 request.go:629] Waited for 196.004131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518-m02
	I0729 20:11:37.037424  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518-m02
	I0729 20:11:37.037433  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:37.037444  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:37.037451  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:37.040956  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:37.236893  755599 request.go:629] Waited for 195.334849ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:37.236976  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:37.236981  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:37.236990  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:37.236994  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:37.240332  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:37.240828  755599 pod_ready.go:92] pod "kube-controller-manager-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:37.240850  755599 pod_ready.go:81] duration metric: took 399.625522ms for pod "kube-controller-manager-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:37.240860  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fh6rg" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:37.437122  755599 request.go:629] Waited for 196.165968ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fh6rg
	I0729 20:11:37.437190  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fh6rg
	I0729 20:11:37.437196  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:37.437204  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:37.437209  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:37.440918  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:37.636903  755599 request.go:629] Waited for 195.291062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:37.636969  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:37.636975  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:37.636983  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:37.636987  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:37.640607  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:37.641309  755599 pod_ready.go:92] pod "kube-proxy-fh6rg" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:37.641340  755599 pod_ready.go:81] duration metric: took 400.472066ms for pod "kube-proxy-fh6rg" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:37.641354  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nfxp2" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:37.837231  755599 request.go:629] Waited for 195.789027ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nfxp2
	I0729 20:11:37.837305  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nfxp2
	I0729 20:11:37.837310  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:37.837319  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:37.837330  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:37.841791  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:38.037794  755599 request.go:629] Waited for 195.32069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:38.037877  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:38.037884  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:38.037897  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:38.037908  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:38.040965  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:38.041467  755599 pod_ready.go:92] pod "kube-proxy-nfxp2" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:38.041490  755599 pod_ready.go:81] duration metric: took 400.124155ms for pod "kube-proxy-nfxp2" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:38.041501  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:38.237576  755599 request.go:629] Waited for 195.990661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518
	I0729 20:11:38.237667  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518
	I0729 20:11:38.237674  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:38.237684  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:38.237692  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:38.241059  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:38.436895  755599 request.go:629] Waited for 195.307559ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:38.436965  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:11:38.436971  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:38.436979  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:38.436983  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:38.440744  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:38.441468  755599 pod_ready.go:92] pod "kube-scheduler-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:38.441489  755599 pod_ready.go:81] duration metric: took 399.982414ms for pod "kube-scheduler-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:38.441500  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:38.637663  755599 request.go:629] Waited for 196.070509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518-m02
	I0729 20:11:38.637738  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518-m02
	I0729 20:11:38.637743  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:38.637751  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:38.637757  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:38.641143  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:38.837180  755599 request.go:629] Waited for 195.409472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:38.837241  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:11:38.837246  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:38.837254  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:38.837260  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:38.840552  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:38.841040  755599 pod_ready.go:92] pod "kube-scheduler-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:11:38.841059  755599 pod_ready.go:81] duration metric: took 399.552687ms for pod "kube-scheduler-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:11:38.841071  755599 pod_ready.go:38] duration metric: took 3.199004886s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
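[editor's note] Each pod_ready wait above performs the same two-step check: GET the pod, confirm its Ready condition, then GET the node it runs on. A minimal sketch of the pod half, sharing the imports of the node-ready sketch earlier (clientset and names are assumptions for illustration):

// podIsReady reports whether the named pod's Ready condition is True,
// the check the pod_ready waits above perform after each GET.
func podIsReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
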
	I0729 20:11:38.841087  755599 api_server.go:52] waiting for apiserver process to appear ...
	I0729 20:11:38.841138  755599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:11:38.857306  755599 api_server.go:72] duration metric: took 21.051860743s to wait for apiserver process to appear ...
	I0729 20:11:38.857336  755599 api_server.go:88] waiting for apiserver healthz status ...
	I0729 20:11:38.857353  755599 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0729 20:11:38.861608  755599 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0729 20:11:38.861691  755599 round_trippers.go:463] GET https://192.168.39.238:8443/version
	I0729 20:11:38.861696  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:38.861707  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:38.861713  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:38.862688  755599 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 20:11:38.862770  755599 api_server.go:141] control plane version: v1.30.3
	I0729 20:11:38.862786  755599 api_server.go:131] duration metric: took 5.444906ms to wait for apiserver health ...
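[editor's note] The api_server wait hits /healthz directly and then GETs /version to read the control-plane version. A small sketch of the /healthz probe with net/http; certificate handling is deliberately simplified (minikube uses the cluster CA, InsecureSkipVerify here is only to keep the sketch short):

package apicheck

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy returns nil when GET <endpoint>/healthz answers 200 "ok".
func apiserverHealthy(endpoint string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}
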
	I0729 20:11:38.862794  755599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 20:11:39.037210  755599 request.go:629] Waited for 174.346456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:11:39.037286  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:11:39.037294  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:39.037303  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:39.037317  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:39.042538  755599 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 20:11:39.046435  755599 system_pods.go:59] 17 kube-system pods found
	I0729 20:11:39.046462  755599 system_pods.go:61] "coredns-7db6d8ff4d-wzmc5" [2badd33a-9085-4e72-9934-f31c6142556e] Running
	I0729 20:11:39.046467  755599 system_pods.go:61] "coredns-7db6d8ff4d-xpkp6" [89bb48a7-72c4-4f23-aad8-530fc74e76e0] Running
	I0729 20:11:39.046471  755599 system_pods.go:61] "etcd-ha-344518" [2d9e6a92-a45e-41fc-9e29-e59128b7b830] Running
	I0729 20:11:39.046474  755599 system_pods.go:61] "etcd-ha-344518-m02" [6c6a4ddc-69fb-45bd-abbb-e51acb5da561] Running
	I0729 20:11:39.046477  755599 system_pods.go:61] "kindnet-jj2b4" [b53c635e-8077-466a-a171-23e84c33bd25] Running
	I0729 20:11:39.046480  755599 system_pods.go:61] "kindnet-nl4kz" [39441191-433d-4abc-b0c8-d4114713f68a] Running
	I0729 20:11:39.046482  755599 system_pods.go:61] "kube-apiserver-ha-344518" [aadbbdf5-6f91-4232-8c08-fc2f91cf35e5] Running
	I0729 20:11:39.046485  755599 system_pods.go:61] "kube-apiserver-ha-344518-m02" [2bc89a1d-0681-451a-bb47-0d82fbeb6a0f] Running
	I0729 20:11:39.046490  755599 system_pods.go:61] "kube-controller-manager-ha-344518" [3c1f20e1-80d6-4bef-a115-d4e62d3d938e] Running
	I0729 20:11:39.046495  755599 system_pods.go:61] "kube-controller-manager-ha-344518-m02" [31b506c1-6be7-4e9a-a96e-b2ac161edcab] Running
	I0729 20:11:39.046499  755599 system_pods.go:61] "kube-proxy-fh6rg" [275f3f36-39e1-461a-9c4d-4b2d8773d325] Running
	I0729 20:11:39.046503  755599 system_pods.go:61] "kube-proxy-nfxp2" [827466b6-aa03-4707-8594-b5eaaa864ebe] Running
	I0729 20:11:39.046508  755599 system_pods.go:61] "kube-scheduler-ha-344518" [e8ae3853-ac48-46fa-88b6-31b4c0f2c527] Running
	I0729 20:11:39.046515  755599 system_pods.go:61] "kube-scheduler-ha-344518-m02" [bd8f41d2-f637-4c19-8b66-7ffc1513d895] Running
	I0729 20:11:39.046519  755599 system_pods.go:61] "kube-vip-ha-344518" [140d2a2f-c461-421e-9b01-a5e6d7f2b9f8] Running
	I0729 20:11:39.046527  755599 system_pods.go:61] "kube-vip-ha-344518-m02" [6024c813-df16-43b4-83cc-e978ceb00d51] Running
	I0729 20:11:39.046531  755599 system_pods.go:61] "storage-provisioner" [9e8bd9d2-8adf-47de-8e32-05d64002a631] Running
	I0729 20:11:39.046541  755599 system_pods.go:74] duration metric: took 183.73745ms to wait for pod list to return data ...
	I0729 20:11:39.046552  755599 default_sa.go:34] waiting for default service account to be created ...
	I0729 20:11:39.236913  755599 request.go:629] Waited for 190.266141ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0729 20:11:39.236988  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0729 20:11:39.236993  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:39.237000  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:39.237004  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:39.240352  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:11:39.240640  755599 default_sa.go:45] found service account: "default"
	I0729 20:11:39.240662  755599 default_sa.go:55] duration metric: took 194.099747ms for default service account to be created ...
	I0729 20:11:39.240676  755599 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 20:11:39.436967  755599 request.go:629] Waited for 196.206967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:11:39.437065  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:11:39.437073  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:39.437087  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:39.437093  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:39.442716  755599 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 20:11:39.448741  755599 system_pods.go:86] 17 kube-system pods found
	I0729 20:11:39.448770  755599 system_pods.go:89] "coredns-7db6d8ff4d-wzmc5" [2badd33a-9085-4e72-9934-f31c6142556e] Running
	I0729 20:11:39.448776  755599 system_pods.go:89] "coredns-7db6d8ff4d-xpkp6" [89bb48a7-72c4-4f23-aad8-530fc74e76e0] Running
	I0729 20:11:39.448780  755599 system_pods.go:89] "etcd-ha-344518" [2d9e6a92-a45e-41fc-9e29-e59128b7b830] Running
	I0729 20:11:39.448784  755599 system_pods.go:89] "etcd-ha-344518-m02" [6c6a4ddc-69fb-45bd-abbb-e51acb5da561] Running
	I0729 20:11:39.448787  755599 system_pods.go:89] "kindnet-jj2b4" [b53c635e-8077-466a-a171-23e84c33bd25] Running
	I0729 20:11:39.448791  755599 system_pods.go:89] "kindnet-nl4kz" [39441191-433d-4abc-b0c8-d4114713f68a] Running
	I0729 20:11:39.448795  755599 system_pods.go:89] "kube-apiserver-ha-344518" [aadbbdf5-6f91-4232-8c08-fc2f91cf35e5] Running
	I0729 20:11:39.448799  755599 system_pods.go:89] "kube-apiserver-ha-344518-m02" [2bc89a1d-0681-451a-bb47-0d82fbeb6a0f] Running
	I0729 20:11:39.448803  755599 system_pods.go:89] "kube-controller-manager-ha-344518" [3c1f20e1-80d6-4bef-a115-d4e62d3d938e] Running
	I0729 20:11:39.448807  755599 system_pods.go:89] "kube-controller-manager-ha-344518-m02" [31b506c1-6be7-4e9a-a96e-b2ac161edcab] Running
	I0729 20:11:39.448811  755599 system_pods.go:89] "kube-proxy-fh6rg" [275f3f36-39e1-461a-9c4d-4b2d8773d325] Running
	I0729 20:11:39.448814  755599 system_pods.go:89] "kube-proxy-nfxp2" [827466b6-aa03-4707-8594-b5eaaa864ebe] Running
	I0729 20:11:39.448818  755599 system_pods.go:89] "kube-scheduler-ha-344518" [e8ae3853-ac48-46fa-88b6-31b4c0f2c527] Running
	I0729 20:11:39.448824  755599 system_pods.go:89] "kube-scheduler-ha-344518-m02" [bd8f41d2-f637-4c19-8b66-7ffc1513d895] Running
	I0729 20:11:39.448829  755599 system_pods.go:89] "kube-vip-ha-344518" [140d2a2f-c461-421e-9b01-a5e6d7f2b9f8] Running
	I0729 20:11:39.448832  755599 system_pods.go:89] "kube-vip-ha-344518-m02" [6024c813-df16-43b4-83cc-e978ceb00d51] Running
	I0729 20:11:39.448835  755599 system_pods.go:89] "storage-provisioner" [9e8bd9d2-8adf-47de-8e32-05d64002a631] Running
	I0729 20:11:39.448846  755599 system_pods.go:126] duration metric: took 208.165158ms to wait for k8s-apps to be running ...
	I0729 20:11:39.448856  755599 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 20:11:39.448902  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:11:39.463783  755599 system_svc.go:56] duration metric: took 14.917659ms WaitForService to wait for kubelet
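[editor's note] The system_svc wait shells out to `sudo systemctl is-active --quiet service kubelet` over minikube's SSH runner; `is-active --quiet` reports the state purely through its exit code. A local sketch of the same check with os/exec (run directly on the node rather than over SSH):

package svccheck

import "os/exec"

// kubeletActive reports whether the kubelet systemd unit is active.
// `systemctl is-active --quiet` exits 0 when the unit is active and
// non-zero otherwise, so the error result carries the answer.
func kubeletActive() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}
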
	I0729 20:11:39.463816  755599 kubeadm.go:582] duration metric: took 21.658372656s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 20:11:39.463843  755599 node_conditions.go:102] verifying NodePressure condition ...
	I0729 20:11:39.637314  755599 request.go:629] Waited for 173.376861ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes
	I0729 20:11:39.637401  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes
	I0729 20:11:39.637409  755599 round_trippers.go:469] Request Headers:
	I0729 20:11:39.637424  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:11:39.637429  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:11:39.641524  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:11:39.642312  755599 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 20:11:39.642367  755599 node_conditions.go:123] node cpu capacity is 2
	I0729 20:11:39.642380  755599 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 20:11:39.642385  755599 node_conditions.go:123] node cpu capacity is 2
	I0729 20:11:39.642390  755599 node_conditions.go:105] duration metric: took 178.541559ms to run NodePressure ...
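[editor's note] The NodePressure step lists all nodes and logs each node's ephemeral-storage and CPU capacity, as seen above. A small sketch of reading those fields with client-go (clientset construction omitted; names are illustrative):

package nodecaps

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists every node's CPU and ephemeral-storage capacity,
// the two figures the NodePressure check logs above.
func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
	return nil
}
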
	I0729 20:11:39.642409  755599 start.go:241] waiting for startup goroutines ...
	I0729 20:11:39.642436  755599 start.go:255] writing updated cluster config ...
	I0729 20:11:39.644658  755599 out.go:177] 
	I0729 20:11:39.646062  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:11:39.646162  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:11:39.647836  755599 out.go:177] * Starting "ha-344518-m03" control-plane node in "ha-344518" cluster
	I0729 20:11:39.649307  755599 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:11:39.649335  755599 cache.go:56] Caching tarball of preloaded images
	I0729 20:11:39.649443  755599 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 20:11:39.649458  755599 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 20:11:39.649554  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:11:39.649742  755599 start.go:360] acquireMachinesLock for ha-344518-m03: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:11:39.649796  755599 start.go:364] duration metric: took 31.452µs to acquireMachinesLock for "ha-344518-m03"
	I0729 20:11:39.649821  755599 start.go:93] Provisioning new machine with config: &{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:11:39.649951  755599 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0729 20:11:39.651593  755599 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 20:11:39.651686  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:11:39.651721  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:11:39.669410  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45409
	I0729 20:11:39.669889  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:11:39.670566  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:11:39.670591  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:11:39.671030  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:11:39.671229  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetMachineName
	I0729 20:11:39.671458  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:11:39.671646  755599 start.go:159] libmachine.API.Create for "ha-344518" (driver="kvm2")
	I0729 20:11:39.671680  755599 client.go:168] LocalClient.Create starting
	I0729 20:11:39.671719  755599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem
	I0729 20:11:39.671780  755599 main.go:141] libmachine: Decoding PEM data...
	I0729 20:11:39.671804  755599 main.go:141] libmachine: Parsing certificate...
	I0729 20:11:39.671867  755599 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem
	I0729 20:11:39.671898  755599 main.go:141] libmachine: Decoding PEM data...
	I0729 20:11:39.671914  755599 main.go:141] libmachine: Parsing certificate...
	I0729 20:11:39.671948  755599 main.go:141] libmachine: Running pre-create checks...
	I0729 20:11:39.671959  755599 main.go:141] libmachine: (ha-344518-m03) Calling .PreCreateCheck
	I0729 20:11:39.672165  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetConfigRaw
	I0729 20:11:39.672560  755599 main.go:141] libmachine: Creating machine...
	I0729 20:11:39.672575  755599 main.go:141] libmachine: (ha-344518-m03) Calling .Create
	I0729 20:11:39.672744  755599 main.go:141] libmachine: (ha-344518-m03) Creating KVM machine...
	I0729 20:11:39.673982  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found existing default KVM network
	I0729 20:11:39.674123  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found existing private KVM network mk-ha-344518
	I0729 20:11:39.674349  755599 main.go:141] libmachine: (ha-344518-m03) Setting up store path in /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03 ...
	I0729 20:11:39.674385  755599 main.go:141] libmachine: (ha-344518-m03) Building disk image from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 20:11:39.674468  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:39.674363  756503 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:11:39.674586  755599 main.go:141] libmachine: (ha-344518-m03) Downloading /home/jenkins/minikube-integration/19344-733808/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 20:11:39.952405  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:39.952249  756503 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa...
	I0729 20:11:40.015841  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:40.015702  756503 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/ha-344518-m03.rawdisk...
	I0729 20:11:40.015883  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Writing magic tar header
	I0729 20:11:40.015901  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Writing SSH key tar header
	I0729 20:11:40.015914  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:40.015819  756503 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03 ...
	I0729 20:11:40.015980  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03
	I0729 20:11:40.016020  755599 main.go:141] libmachine: (ha-344518-m03) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03 (perms=drwx------)
	I0729 20:11:40.016053  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines
	I0729 20:11:40.016069  755599 main.go:141] libmachine: (ha-344518-m03) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines (perms=drwxr-xr-x)
	I0729 20:11:40.016090  755599 main.go:141] libmachine: (ha-344518-m03) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube (perms=drwxr-xr-x)
	I0729 20:11:40.016102  755599 main.go:141] libmachine: (ha-344518-m03) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808 (perms=drwxrwxr-x)
	I0729 20:11:40.016115  755599 main.go:141] libmachine: (ha-344518-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 20:11:40.016131  755599 main.go:141] libmachine: (ha-344518-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 20:11:40.016144  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:11:40.016155  755599 main.go:141] libmachine: (ha-344518-m03) Creating domain...
	I0729 20:11:40.016175  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808
	I0729 20:11:40.016193  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 20:11:40.016205  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Checking permissions on dir: /home/jenkins
	I0729 20:11:40.016215  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Checking permissions on dir: /home
	I0729 20:11:40.016225  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Skipping /home - not owner
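[editor's note] The block above is the driver fixing permissions on the machine directory and its parents so the hypervisor can reach the disk image, skipping directories the user does not own. A hedged sketch of that walk-up-and-chmod pattern (modes mirror the log: 0700 on the machine dir, execute/search bits added on parents; the ownership check that produces "Skipping /home - not owner" is omitted for brevity):

package diskperm

import (
	"os"
	"path/filepath"
)

// ensureTraversable sets the machine directory to 0700, then walks up the
// path adding execute (search) permission on each parent up to stopAt.
func ensureTraversable(machineDir, stopAt string) error {
	if err := os.Chmod(machineDir, 0o700); err != nil {
		return err
	}
	for dir := filepath.Dir(machineDir); ; dir = filepath.Dir(dir) {
		info, err := os.Stat(dir)
		if err != nil {
			return err
		}
		if err := os.Chmod(dir, info.Mode().Perm()|0o111); err != nil {
			return err
		}
		if dir == stopAt || dir == filepath.Dir(dir) {
			return nil
		}
	}
}
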
	I0729 20:11:40.017000  755599 main.go:141] libmachine: (ha-344518-m03) define libvirt domain using xml: 
	I0729 20:11:40.017019  755599 main.go:141] libmachine: (ha-344518-m03) <domain type='kvm'>
	I0729 20:11:40.017030  755599 main.go:141] libmachine: (ha-344518-m03)   <name>ha-344518-m03</name>
	I0729 20:11:40.017038  755599 main.go:141] libmachine: (ha-344518-m03)   <memory unit='MiB'>2200</memory>
	I0729 20:11:40.017048  755599 main.go:141] libmachine: (ha-344518-m03)   <vcpu>2</vcpu>
	I0729 20:11:40.017064  755599 main.go:141] libmachine: (ha-344518-m03)   <features>
	I0729 20:11:40.017077  755599 main.go:141] libmachine: (ha-344518-m03)     <acpi/>
	I0729 20:11:40.017087  755599 main.go:141] libmachine: (ha-344518-m03)     <apic/>
	I0729 20:11:40.017100  755599 main.go:141] libmachine: (ha-344518-m03)     <pae/>
	I0729 20:11:40.017111  755599 main.go:141] libmachine: (ha-344518-m03)     
	I0729 20:11:40.017122  755599 main.go:141] libmachine: (ha-344518-m03)   </features>
	I0729 20:11:40.017134  755599 main.go:141] libmachine: (ha-344518-m03)   <cpu mode='host-passthrough'>
	I0729 20:11:40.017155  755599 main.go:141] libmachine: (ha-344518-m03)   
	I0729 20:11:40.017172  755599 main.go:141] libmachine: (ha-344518-m03)   </cpu>
	I0729 20:11:40.017202  755599 main.go:141] libmachine: (ha-344518-m03)   <os>
	I0729 20:11:40.017226  755599 main.go:141] libmachine: (ha-344518-m03)     <type>hvm</type>
	I0729 20:11:40.017236  755599 main.go:141] libmachine: (ha-344518-m03)     <boot dev='cdrom'/>
	I0729 20:11:40.017251  755599 main.go:141] libmachine: (ha-344518-m03)     <boot dev='hd'/>
	I0729 20:11:40.017261  755599 main.go:141] libmachine: (ha-344518-m03)     <bootmenu enable='no'/>
	I0729 20:11:40.017271  755599 main.go:141] libmachine: (ha-344518-m03)   </os>
	I0729 20:11:40.017300  755599 main.go:141] libmachine: (ha-344518-m03)   <devices>
	I0729 20:11:40.017312  755599 main.go:141] libmachine: (ha-344518-m03)     <disk type='file' device='cdrom'>
	I0729 20:11:40.017325  755599 main.go:141] libmachine: (ha-344518-m03)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/boot2docker.iso'/>
	I0729 20:11:40.017340  755599 main.go:141] libmachine: (ha-344518-m03)       <target dev='hdc' bus='scsi'/>
	I0729 20:11:40.017351  755599 main.go:141] libmachine: (ha-344518-m03)       <readonly/>
	I0729 20:11:40.017362  755599 main.go:141] libmachine: (ha-344518-m03)     </disk>
	I0729 20:11:40.017373  755599 main.go:141] libmachine: (ha-344518-m03)     <disk type='file' device='disk'>
	I0729 20:11:40.017385  755599 main.go:141] libmachine: (ha-344518-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 20:11:40.017400  755599 main.go:141] libmachine: (ha-344518-m03)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/ha-344518-m03.rawdisk'/>
	I0729 20:11:40.017411  755599 main.go:141] libmachine: (ha-344518-m03)       <target dev='hda' bus='virtio'/>
	I0729 20:11:40.017426  755599 main.go:141] libmachine: (ha-344518-m03)     </disk>
	I0729 20:11:40.017446  755599 main.go:141] libmachine: (ha-344518-m03)     <interface type='network'>
	I0729 20:11:40.017459  755599 main.go:141] libmachine: (ha-344518-m03)       <source network='mk-ha-344518'/>
	I0729 20:11:40.017472  755599 main.go:141] libmachine: (ha-344518-m03)       <model type='virtio'/>
	I0729 20:11:40.017483  755599 main.go:141] libmachine: (ha-344518-m03)     </interface>
	I0729 20:11:40.017496  755599 main.go:141] libmachine: (ha-344518-m03)     <interface type='network'>
	I0729 20:11:40.017509  755599 main.go:141] libmachine: (ha-344518-m03)       <source network='default'/>
	I0729 20:11:40.017526  755599 main.go:141] libmachine: (ha-344518-m03)       <model type='virtio'/>
	I0729 20:11:40.017538  755599 main.go:141] libmachine: (ha-344518-m03)     </interface>
	I0729 20:11:40.017557  755599 main.go:141] libmachine: (ha-344518-m03)     <serial type='pty'>
	I0729 20:11:40.017576  755599 main.go:141] libmachine: (ha-344518-m03)       <target port='0'/>
	I0729 20:11:40.017587  755599 main.go:141] libmachine: (ha-344518-m03)     </serial>
	I0729 20:11:40.017595  755599 main.go:141] libmachine: (ha-344518-m03)     <console type='pty'>
	I0729 20:11:40.017607  755599 main.go:141] libmachine: (ha-344518-m03)       <target type='serial' port='0'/>
	I0729 20:11:40.017619  755599 main.go:141] libmachine: (ha-344518-m03)     </console>
	I0729 20:11:40.017633  755599 main.go:141] libmachine: (ha-344518-m03)     <rng model='virtio'>
	I0729 20:11:40.017647  755599 main.go:141] libmachine: (ha-344518-m03)       <backend model='random'>/dev/random</backend>
	I0729 20:11:40.017656  755599 main.go:141] libmachine: (ha-344518-m03)     </rng>
	I0729 20:11:40.017676  755599 main.go:141] libmachine: (ha-344518-m03)     
	I0729 20:11:40.017693  755599 main.go:141] libmachine: (ha-344518-m03)     
	I0729 20:11:40.017707  755599 main.go:141] libmachine: (ha-344518-m03)   </devices>
	I0729 20:11:40.017715  755599 main.go:141] libmachine: (ha-344518-m03) </domain>
	I0729 20:11:40.017728  755599 main.go:141] libmachine: (ha-344518-m03) 
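Editor's note: the XML dump above is the libvirt domain definition the kvm2 driver generates for the ha-344518-m03 node before "Creating domain...". As a rough sketch only (the driver talks to libvirt through its Go API rather than the virsh CLI, and "domain.xml" below is a hypothetical file holding XML like the dump above), defining and booting such a domain could look like this:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Sketch: register a libvirt domain from a generated XML file, then start it.
	func main() {
		for _, args := range [][]string{
			{"virsh", "define", "domain.xml"},   // register the definition with libvirt
			{"virsh", "start", "ha-344518-m03"}, // boot it (the "Creating domain..." step)
		} {
			out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
			if err != nil {
				fmt.Printf("%v failed: %v\n%s", args, err, out)
				return
			}
			fmt.Printf("%s", out)
		}
	}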
	I0729 20:11:40.024354  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:c5:c3:3e in network default
	I0729 20:11:40.024921  755599 main.go:141] libmachine: (ha-344518-m03) Ensuring networks are active...
	I0729 20:11:40.024940  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:40.025593  755599 main.go:141] libmachine: (ha-344518-m03) Ensuring network default is active
	I0729 20:11:40.025843  755599 main.go:141] libmachine: (ha-344518-m03) Ensuring network mk-ha-344518 is active
	I0729 20:11:40.026177  755599 main.go:141] libmachine: (ha-344518-m03) Getting domain xml...
	I0729 20:11:40.026814  755599 main.go:141] libmachine: (ha-344518-m03) Creating domain...
	I0729 20:11:41.266986  755599 main.go:141] libmachine: (ha-344518-m03) Waiting to get IP...
	I0729 20:11:41.267910  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:41.268388  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:41.268414  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:41.268321  756503 retry.go:31] will retry after 277.943575ms: waiting for machine to come up
	I0729 20:11:41.547760  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:41.548259  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:41.548291  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:41.548197  756503 retry.go:31] will retry after 314.191405ms: waiting for machine to come up
	I0729 20:11:41.863651  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:41.864119  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:41.864144  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:41.864073  756503 retry.go:31] will retry after 457.969852ms: waiting for machine to come up
	I0729 20:11:42.323737  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:42.324117  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:42.324143  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:42.324075  756503 retry.go:31] will retry after 497.585545ms: waiting for machine to come up
	I0729 20:11:42.823826  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:42.824310  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:42.824350  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:42.824264  756503 retry.go:31] will retry after 721.983704ms: waiting for machine to come up
	I0729 20:11:43.548162  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:43.548608  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:43.548638  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:43.548553  756503 retry.go:31] will retry after 646.831228ms: waiting for machine to come up
	I0729 20:11:44.197556  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:44.198085  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:44.198115  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:44.198015  756503 retry.go:31] will retry after 924.878532ms: waiting for machine to come up
	I0729 20:11:45.124713  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:45.125264  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:45.125305  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:45.125223  756503 retry.go:31] will retry after 1.391829943s: waiting for machine to come up
	I0729 20:11:46.518870  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:46.519370  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:46.519400  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:46.519312  756503 retry.go:31] will retry after 1.668556944s: waiting for machine to come up
	I0729 20:11:48.189217  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:48.189778  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:48.189805  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:48.189728  756503 retry.go:31] will retry after 1.865775967s: waiting for machine to come up
	I0729 20:11:50.057284  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:50.057789  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:50.057808  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:50.057754  756503 retry.go:31] will retry after 2.228840474s: waiting for machine to come up
	I0729 20:11:52.289080  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:52.289596  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:52.289622  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:52.289519  756503 retry.go:31] will retry after 3.476158421s: waiting for machine to come up
	I0729 20:11:55.767656  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:55.768243  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find current IP address of domain ha-344518-m03 in network mk-ha-344518
	I0729 20:11:55.768268  755599 main.go:141] libmachine: (ha-344518-m03) DBG | I0729 20:11:55.768197  756503 retry.go:31] will retry after 4.067263279s: waiting for machine to come up
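Editor's note: the retry.go lines above poll the DHCP leases of mk-ha-344518 with a growing delay until the new VM picks up an address. A minimal sketch of that polling pattern, with lookupIP standing in for the lease lookup and the delay growth chosen only to resemble the intervals in the log:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// lookupIP is a placeholder for "read the network's DHCP leases and
	// return the IP bound to the machine's MAC address".
	func lookupIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		delay := 250 * time.Millisecond
		for attempt := 1; attempt <= 15; attempt++ {
			ip, err := lookupIP()
			if err == nil {
				fmt.Println("Found IP for machine:", ip)
				return
			}
			fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
			time.Sleep(delay)
			delay = delay * 3 / 2 // grow the wait, roughly like the log's backoff
		}
		fmt.Println("gave up waiting for an IP")
	}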
	I0729 20:11:59.836951  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:59.837480  755599 main.go:141] libmachine: (ha-344518-m03) Found IP for machine: 192.168.39.53
	I0729 20:11:59.837505  755599 main.go:141] libmachine: (ha-344518-m03) Reserving static IP address...
	I0729 20:11:59.837518  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has current primary IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:59.837983  755599 main.go:141] libmachine: (ha-344518-m03) DBG | unable to find host DHCP lease matching {name: "ha-344518-m03", mac: "52:54:00:36:90:07", ip: "192.168.39.53"} in network mk-ha-344518
	I0729 20:11:59.915114  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Getting to WaitForSSH function...
	I0729 20:11:59.915149  755599 main.go:141] libmachine: (ha-344518-m03) Reserved static IP address: 192.168.39.53
	I0729 20:11:59.915185  755599 main.go:141] libmachine: (ha-344518-m03) Waiting for SSH to be available...
	I0729 20:11:59.917944  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:59.918593  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:minikube Clientid:01:52:54:00:36:90:07}
	I0729 20:11:59.918627  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:11:59.918811  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Using SSH client type: external
	I0729 20:11:59.918842  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa (-rw-------)
	I0729 20:11:59.918875  755599 main.go:141] libmachine: (ha-344518-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 20:11:59.918890  755599 main.go:141] libmachine: (ha-344518-m03) DBG | About to run SSH command:
	I0729 20:11:59.918906  755599 main.go:141] libmachine: (ha-344518-m03) DBG | exit 0
	I0729 20:12:00.044086  755599 main.go:141] libmachine: (ha-344518-m03) DBG | SSH cmd err, output: <nil>: 
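Editor's note: the DBG lines show WaitForSSH shelling out to the system ssh client and running "exit 0" until the command succeeds. A simplified one-shot version of that probe, using a subset of the options and the key path and address shown in the log (the retry loop around it is omitted):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "ConnectTimeout=10",
			"-i", "/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa",
			"docker@192.168.39.53",
			"exit 0") // any trivially successful command proves SSH is reachable
		if err := cmd.Run(); err != nil {
			fmt.Println("SSH not ready yet:", err)
			return
		}
		fmt.Println("SSH is available")
	}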
	I0729 20:12:00.044430  755599 main.go:141] libmachine: (ha-344518-m03) KVM machine creation complete!
	I0729 20:12:00.044763  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetConfigRaw
	I0729 20:12:00.045479  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:12:00.045692  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:12:00.045866  755599 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 20:12:00.045881  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetState
	I0729 20:12:00.047074  755599 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 20:12:00.047089  755599 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 20:12:00.047099  755599 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 20:12:00.047106  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:00.049675  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.050048  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:00.050074  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.050234  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:00.050423  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.050592  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.050738  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:00.050936  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:12:00.051156  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0729 20:12:00.051168  755599 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 20:12:00.155934  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:12:00.155961  755599 main.go:141] libmachine: Detecting the provisioner...
	I0729 20:12:00.155971  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:00.159018  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.159465  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:00.159494  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.159639  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:00.159887  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.160084  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.160213  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:00.160432  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:12:00.160592  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0729 20:12:00.160602  755599 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 20:12:00.268562  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 20:12:00.268629  755599 main.go:141] libmachine: found compatible host: buildroot
	I0729 20:12:00.268640  755599 main.go:141] libmachine: Provisioning with buildroot...
	I0729 20:12:00.268651  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetMachineName
	I0729 20:12:00.268970  755599 buildroot.go:166] provisioning hostname "ha-344518-m03"
	I0729 20:12:00.269003  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetMachineName
	I0729 20:12:00.269244  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:00.272477  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.272897  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:00.272921  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.273217  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:00.273467  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.273665  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.273856  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:00.274079  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:12:00.274259  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0729 20:12:00.274271  755599 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344518-m03 && echo "ha-344518-m03" | sudo tee /etc/hostname
	I0729 20:12:00.395035  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344518-m03
	
	I0729 20:12:00.395069  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:00.398127  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.398591  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:00.398617  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.398864  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:00.399074  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.399244  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:00.399446  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:00.399699  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:12:00.399930  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0729 20:12:00.399954  755599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344518-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344518-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344518-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 20:12:00.517438  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:12:00.517476  755599 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 20:12:00.517500  755599 buildroot.go:174] setting up certificates
	I0729 20:12:00.517516  755599 provision.go:84] configureAuth start
	I0729 20:12:00.517529  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetMachineName
	I0729 20:12:00.517880  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:12:00.520617  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.521007  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:00.521038  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.521317  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:00.523530  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.523932  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:00.523960  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:00.524138  755599 provision.go:143] copyHostCerts
	I0729 20:12:00.524171  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:12:00.524202  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem, removing ...
	I0729 20:12:00.524212  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:12:00.524280  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 20:12:00.524375  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:12:00.524393  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem, removing ...
	I0729 20:12:00.524399  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:12:00.524424  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 20:12:00.524479  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:12:00.524495  755599 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem, removing ...
	I0729 20:12:00.524501  755599 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:12:00.524522  755599 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 20:12:00.524580  755599 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.ha-344518-m03 san=[127.0.0.1 192.168.39.53 ha-344518-m03 localhost minikube]
	I0729 20:12:01.019516  755599 provision.go:177] copyRemoteCerts
	I0729 20:12:01.019584  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 20:12:01.019617  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:01.022183  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.022497  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.022533  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.022753  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:01.022952  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.023130  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:01.023424  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:12:01.106028  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 20:12:01.106116  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 20:12:01.130953  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 20:12:01.131023  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 20:12:01.153630  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 20:12:01.153713  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 20:12:01.176800  755599 provision.go:87] duration metric: took 659.267754ms to configureAuth
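Editor's note: configureAuth above generates a server certificate signed by minikubeCA with the SANs listed at provision.go:117 (127.0.0.1, 192.168.39.53, ha-344518-m03, localhost, minikube). A self-contained sketch with Go's crypto/x509 is below; the throwaway in-memory CA stands in for the real CA under .minikube/certs, field choices other than the SANs are assumptions, and error handling is abbreviated:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA standing in for minikubeCA (errors elided for brevity).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs shown in the log for ha-344518-m03.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-344518-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-344518-m03", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.53")},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}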
	I0729 20:12:01.176831  755599 buildroot.go:189] setting minikube options for container-runtime
	I0729 20:12:01.177108  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:12:01.177212  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:01.180151  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.180649  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.180679  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.180828  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:01.181075  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.181365  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.181529  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:01.181711  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:12:01.181871  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0729 20:12:01.181884  755599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 20:12:01.454007  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 20:12:01.454049  755599 main.go:141] libmachine: Checking connection to Docker...
	I0729 20:12:01.454062  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetURL
	I0729 20:12:01.455471  755599 main.go:141] libmachine: (ha-344518-m03) DBG | Using libvirt version 6000000
	I0729 20:12:01.457700  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.458171  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.458204  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.458384  755599 main.go:141] libmachine: Docker is up and running!
	I0729 20:12:01.458404  755599 main.go:141] libmachine: Reticulating splines...
	I0729 20:12:01.458413  755599 client.go:171] duration metric: took 21.786723495s to LocalClient.Create
	I0729 20:12:01.458439  755599 start.go:167] duration metric: took 21.786794984s to libmachine.API.Create "ha-344518"
	I0729 20:12:01.458449  755599 start.go:293] postStartSetup for "ha-344518-m03" (driver="kvm2")
	I0729 20:12:01.458462  755599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 20:12:01.458491  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:12:01.458745  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 20:12:01.458774  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:01.460765  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.461118  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.461148  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.461270  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:01.461497  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.461665  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:01.461827  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:12:01.548457  755599 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 20:12:01.552563  755599 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 20:12:01.552589  755599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 20:12:01.552668  755599 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 20:12:01.552739  755599 filesync.go:149] local asset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> 7409622.pem in /etc/ssl/certs
	I0729 20:12:01.552748  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /etc/ssl/certs/7409622.pem
	I0729 20:12:01.552826  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 20:12:01.561243  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:12:01.584677  755599 start.go:296] duration metric: took 126.208067ms for postStartSetup
	I0729 20:12:01.584759  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetConfigRaw
	I0729 20:12:01.585413  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:12:01.588230  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.588553  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.588582  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.588897  755599 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:12:01.589171  755599 start.go:128] duration metric: took 21.939207595s to createHost
	I0729 20:12:01.589204  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:01.592831  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.593351  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.593378  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.593457  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:01.593662  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.593842  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.593979  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:01.594149  755599 main.go:141] libmachine: Using SSH client type: native
	I0729 20:12:01.594313  755599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0729 20:12:01.594325  755599 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 20:12:01.700211  755599 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722283921.676007561
	
	I0729 20:12:01.700232  755599 fix.go:216] guest clock: 1722283921.676007561
	I0729 20:12:01.700239  755599 fix.go:229] Guest: 2024-07-29 20:12:01.676007561 +0000 UTC Remote: 2024-07-29 20:12:01.589189696 +0000 UTC m=+175.394462204 (delta=86.817865ms)
	I0729 20:12:01.700255  755599 fix.go:200] guest clock delta is within tolerance: 86.817865ms
	I0729 20:12:01.700260  755599 start.go:83] releasing machines lock for "ha-344518-m03", held for 22.050452874s
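Editor's note: the fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host-side reference time, and skip adjustment when the delta is small. A tiny sketch of that comparison; the guest timestamp is copied from the log and the 2s tolerance is an assumed threshold, not necessarily minikube's value:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func main() {
		// Guest-side output of `date +%s.%N`, as seen in the log.
		guestRaw := "1722283921.676007561"
		secs, err := strconv.ParseFloat(guestRaw, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))

		host := time.Now() // host-side reference time
		delta := host.Sub(guest)

		const tolerance = 2 * time.Second // assumed threshold
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock skewed by %v, would adjust it\n", delta)
		}
	}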
	I0729 20:12:01.700277  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:12:01.700532  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:12:01.703365  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.703765  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.703796  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.706380  755599 out.go:177] * Found network options:
	I0729 20:12:01.707962  755599 out.go:177]   - NO_PROXY=192.168.39.238,192.168.39.104
	W0729 20:12:01.709275  755599 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 20:12:01.709309  755599 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 20:12:01.709323  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:12:01.709896  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:12:01.710112  755599 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:12:01.710217  755599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 20:12:01.710262  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	W0729 20:12:01.710329  755599 proxy.go:119] fail to check proxy env: Error ip not in block
	W0729 20:12:01.710353  755599 proxy.go:119] fail to check proxy env: Error ip not in block
	I0729 20:12:01.710423  755599 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 20:12:01.710441  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:12:01.713282  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.713474  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.713724  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.713752  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.713913  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:01.713917  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:01.713938  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:01.714114  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:12:01.714125  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.714319  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:01.714344  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:12:01.714499  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:12:01.714491  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:12:01.714666  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:12:01.944094  755599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 20:12:01.950694  755599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 20:12:01.950769  755599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 20:12:01.967016  755599 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 20:12:01.967044  755599 start.go:495] detecting cgroup driver to use...
	I0729 20:12:01.967110  755599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 20:12:01.982528  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 20:12:01.995708  755599 docker.go:216] disabling cri-docker service (if available) ...
	I0729 20:12:01.995780  755599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 20:12:02.009084  755599 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 20:12:02.023369  755599 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 20:12:02.128484  755599 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 20:12:02.283662  755599 docker.go:232] disabling docker service ...
	I0729 20:12:02.283750  755599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 20:12:02.297503  755599 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 20:12:02.309551  755599 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 20:12:02.426139  755599 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 20:12:02.556583  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 20:12:02.570797  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 20:12:02.589222  755599 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 20:12:02.589290  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:12:02.599755  755599 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 20:12:02.599838  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:12:02.610345  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:12:02.620910  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:12:02.631487  755599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 20:12:02.642693  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:12:02.653556  755599 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:12:02.669084  755599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:12:02.679725  755599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 20:12:02.688942  755599 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 20:12:02.689008  755599 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 20:12:02.701106  755599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 20:12:02.710079  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:12:02.830153  755599 ssh_runner.go:195] Run: sudo systemctl restart crio
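Editor's note: the sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf on the node (pause image, cgroup manager, conmon cgroup, unprivileged port sysctl) via sed over SSH, then reloads systemd and restarts CRI-O. A local, simplified sketch of two of those edits, operating on a hypothetical local copy of the file instead of the remote one:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		path := "02-crio.conf" // hypothetical local copy of the drop-in config
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			fmt.Println("write:", err)
			return
		}
		fmt.Println("updated", path, "- CRI-O must be restarted for the change to apply")
	}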
	I0729 20:12:02.953671  755599 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 20:12:02.953750  755599 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 20:12:02.958085  755599 start.go:563] Will wait 60s for crictl version
	I0729 20:12:02.958158  755599 ssh_runner.go:195] Run: which crictl
	I0729 20:12:02.961886  755599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 20:12:02.998893  755599 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 20:12:02.998990  755599 ssh_runner.go:195] Run: crio --version
	I0729 20:12:03.026129  755599 ssh_runner.go:195] Run: crio --version
	I0729 20:12:03.055276  755599 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 20:12:03.056720  755599 out.go:177]   - env NO_PROXY=192.168.39.238
	I0729 20:12:03.057990  755599 out.go:177]   - env NO_PROXY=192.168.39.238,192.168.39.104
	I0729 20:12:03.059160  755599 main.go:141] libmachine: (ha-344518-m03) Calling .GetIP
	I0729 20:12:03.062225  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:03.062566  755599 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:12:03.062598  755599 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:12:03.062814  755599 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 20:12:03.066779  755599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 20:12:03.078768  755599 mustload.go:65] Loading cluster: ha-344518
	I0729 20:12:03.079042  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:12:03.079301  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:12:03.079345  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:12:03.094938  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39201
	I0729 20:12:03.095433  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:12:03.095903  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:12:03.095925  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:12:03.096273  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:12:03.096497  755599 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:12:03.098337  755599 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:12:03.098699  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:12:03.098748  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:12:03.114982  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46295
	I0729 20:12:03.115491  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:12:03.115971  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:12:03.115994  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:12:03.116337  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:12:03.116537  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:12:03.116690  755599 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518 for IP: 192.168.39.53
	I0729 20:12:03.116702  755599 certs.go:194] generating shared ca certs ...
	I0729 20:12:03.116721  755599 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:12:03.116856  755599 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 20:12:03.116897  755599 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 20:12:03.116906  755599 certs.go:256] generating profile certs ...
	I0729 20:12:03.116979  755599 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key
	I0729 20:12:03.117008  755599 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.cdf4bc35
	I0729 20:12:03.117030  755599 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.cdf4bc35 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.104 192.168.39.53 192.168.39.254]
	I0729 20:12:03.311360  755599 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.cdf4bc35 ...
	I0729 20:12:03.311397  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.cdf4bc35: {Name:mk1a78a099fd3736182aaf0edfadec7a0e984458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:12:03.311617  755599 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.cdf4bc35 ...
	I0729 20:12:03.311644  755599 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.cdf4bc35: {Name:mk5b422f05c9b8fee6cce59eb83e918019dbaa81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:12:03.311767  755599 certs.go:381] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.cdf4bc35 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt
	I0729 20:12:03.311904  755599 certs.go:385] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.cdf4bc35 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key
	I0729 20:12:03.312054  755599 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key
	I0729 20:12:03.312075  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 20:12:03.312094  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 20:12:03.312110  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 20:12:03.312122  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 20:12:03.312135  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 20:12:03.312147  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 20:12:03.312160  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 20:12:03.312173  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 20:12:03.312231  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem (1338 bytes)
	W0729 20:12:03.312263  755599 certs.go:480] ignoring /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962_empty.pem, impossibly tiny 0 bytes
	I0729 20:12:03.312272  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 20:12:03.312303  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 20:12:03.312326  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 20:12:03.312348  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 20:12:03.312387  755599 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:12:03.312410  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /usr/share/ca-certificates/7409622.pem
	I0729 20:12:03.312422  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:12:03.312438  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem -> /usr/share/ca-certificates/740962.pem
	I0729 20:12:03.312474  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:12:03.316389  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:12:03.316879  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:12:03.316911  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:12:03.317134  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:12:03.317372  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:12:03.317550  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:12:03.317668  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:12:03.392377  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0729 20:12:03.398150  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0729 20:12:03.409180  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0729 20:12:03.413351  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0729 20:12:03.424798  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0729 20:12:03.429138  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0729 20:12:03.442321  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0729 20:12:03.446918  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0729 20:12:03.458005  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0729 20:12:03.462607  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0729 20:12:03.472735  755599 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0729 20:12:03.477376  755599 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0729 20:12:03.488496  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 20:12:03.513287  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 20:12:03.536285  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 20:12:03.558939  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 20:12:03.583477  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0729 20:12:03.606716  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 20:12:03.628873  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 20:12:03.652531  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 20:12:03.675313  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /usr/share/ca-certificates/7409622.pem (1708 bytes)
	I0729 20:12:03.698283  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 20:12:03.720354  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem --> /usr/share/ca-certificates/740962.pem (1338 bytes)
	I0729 20:12:03.742418  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0729 20:12:03.758844  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0729 20:12:03.774962  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0729 20:12:03.789867  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0729 20:12:03.805056  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0729 20:12:03.820390  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0729 20:12:03.835756  755599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0729 20:12:03.853933  755599 ssh_runner.go:195] Run: openssl version
	I0729 20:12:03.859717  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7409622.pem && ln -fs /usr/share/ca-certificates/7409622.pem /etc/ssl/certs/7409622.pem"
	I0729 20:12:03.870979  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7409622.pem
	I0729 20:12:03.875043  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 20:12:03.875111  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7409622.pem
	I0729 20:12:03.880494  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7409622.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 20:12:03.891853  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 20:12:03.902810  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:12:03.906894  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:12:03.906943  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:12:03.912187  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 20:12:03.923880  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/740962.pem && ln -fs /usr/share/ca-certificates/740962.pem /etc/ssl/certs/740962.pem"
	I0729 20:12:03.934179  755599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/740962.pem
	I0729 20:12:03.938580  755599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 20:12:03.938645  755599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/740962.pem
	I0729 20:12:03.943899  755599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/740962.pem /etc/ssl/certs/51391683.0"
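The test/ln/hash/ln sequences above install each CA under /etc/ssl/certs twice: once by file name and once as a <subject-hash>.0 symlink, which is the hashed-directory lookup OpenSSL uses to locate trust anchors. A minimal Go sketch of that pairing for the minikubeCA file named in the log; it only prints the link it would create, so it has no side effects:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// "openssl x509 -hash -noout" prints the subject-name hash that OpenSSL
	// expects as the symlink name inside a hashed certificate directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	// The log pairs this hash with an "ln -fs" into /etc/ssl/certs/<hash>.0;
	// the sketch prints the equivalent command instead of creating the link.
	fmt.Printf("ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}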
	I0729 20:12:03.954088  755599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:12:03.957949  755599 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 20:12:03.958016  755599 kubeadm.go:934] updating node {m03 192.168.39.53 8443 v1.30.3 crio true true} ...
	I0729 20:12:03.958134  755599 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344518-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 20:12:03.958167  755599 kube-vip.go:115] generating kube-vip config ...
	I0729 20:12:03.958202  755599 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 20:12:03.972405  755599 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 20:12:03.972485  755599 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
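The generated manifest runs kube-vip as a static pod with ARP-based leader election (vip_leaderelection with the plndr-cp-lock lease), so the HA virtual IP 192.168.39.254 on port 8443 is bound on whichever control-plane node currently holds the lease. A small Go sketch, using only the VIP from the config above, that reports whether the host it runs on is the current holder:

package main

import (
	"fmt"
	"net"
)

func main() {
	// VIP taken from the kube-vip config above.
	const vip = "192.168.39.254"
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		addrs, err := ifc.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			// Interface addresses come back in CIDR form, e.g. 192.168.39.238/24.
			ip, _, err := net.ParseCIDR(a.String())
			if err != nil {
				continue
			}
			if ip.String() == vip {
				fmt.Printf("VIP %s is currently bound to %s on this host\n", vip, ifc.Name)
				return
			}
		}
	}
	fmt.Printf("VIP %s is not bound here (another control-plane node holds the lease)\n", vip)
}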
	I0729 20:12:03.972576  755599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 20:12:03.982246  755599 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0729 20:12:03.982305  755599 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0729 20:12:03.990936  755599 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0729 20:12:03.990949  755599 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0729 20:12:03.990967  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 20:12:03.990974  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 20:12:03.990949  755599 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0729 20:12:03.991042  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:12:03.991066  755599 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0729 20:12:03.991160  755599 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0729 20:12:04.008566  755599 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 20:12:04.008583  755599 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0729 20:12:04.008614  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0729 20:12:04.008675  755599 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0729 20:12:04.008682  755599 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0729 20:12:04.008712  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0729 20:12:04.029548  755599 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0729 20:12:04.029585  755599 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
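The three transfers above correspond to the checksum=file:...sha256 URLs logged earlier: each binary is fetched from dl.k8s.io and pinned to its published .sha256 file. A sketch that re-verifies the kubelet copy against that checksum, assuming the scp target path from the log:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Checksum URL pattern copied from the log; the local path is the scp target above.
	resp, err := http.Get("https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	published, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	f, err := os.Open("/var/lib/minikube/binaries/v1.30.3/kubelet")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}

	if strings.TrimSpace(string(published)) == hex.EncodeToString(h.Sum(nil)) {
		fmt.Println("kubelet checksum matches dl.k8s.io")
	} else {
		fmt.Println("kubelet checksum mismatch")
	}
}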
	I0729 20:12:04.841331  755599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0729 20:12:04.850847  755599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0729 20:12:04.866796  755599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 20:12:04.882035  755599 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 20:12:04.897931  755599 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 20:12:04.901677  755599 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
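The bash one-liner above strips any stale control-plane.minikube.internal mapping from /etc/hosts and appends the HA VIP entry. A Go sketch of the same rewrite that writes to /tmp/hosts.rewritten (an assumed scratch path) instead of replacing /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// VIP and hostname taken from the log; /tmp/hosts.rewritten is an assumption
	// so the sketch does not touch the real /etc/hosts.
	const entry = "192.168.39.254\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror the grep -v in the log: drop any existing mapping for the name.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/tmp/hosts.rewritten", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote /tmp/hosts.rewritten")
}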
	I0729 20:12:04.912673  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:12:05.027888  755599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:12:05.044310  755599 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:12:05.044843  755599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:12:05.044904  755599 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:12:05.061266  755599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42771
	I0729 20:12:05.061837  755599 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:12:05.062673  755599 main.go:141] libmachine: Using API Version  1
	I0729 20:12:05.062788  755599 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:12:05.063225  755599 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:12:05.064352  755599 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:12:05.064806  755599 start.go:317] joinCluster: &{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:12:05.064955  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0729 20:12:05.064977  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:12:05.067982  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:12:05.068438  755599 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:12:05.068466  755599 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:12:05.068626  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:12:05.068827  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:12:05.068968  755599 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:12:05.069120  755599 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:12:05.239152  755599 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:12:05.239229  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6gvqoy.bmocsw69jkjfmihd --discovery-token-ca-cert-hash sha256:6ca3a9d55ee61a543466ff10da1967c1b50ddc5ed0f369803448ea7dd15a35e4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344518-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443"
	I0729 20:12:28.178733  755599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 6gvqoy.bmocsw69jkjfmihd --discovery-token-ca-cert-hash sha256:6ca3a9d55ee61a543466ff10da1967c1b50ddc5ed0f369803448ea7dd15a35e4 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-344518-m03 --control-plane --apiserver-advertise-address=192.168.39.53 --apiserver-bind-port=8443": (22.939473724s)
	I0729 20:12:28.178774  755599 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
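The join command pins the cluster CA with --discovery-token-ca-cert-hash; by kubeadm convention that value is sha256 over the DER-encoded Subject Public Key Info of the CA certificate. A sketch that recomputes the pin from the ca.crt transferred earlier in the log, for comparison with the sha256:6ca3a9d5... value used above:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// CA path taken from the certificate transfers earlier in the log.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery pin hashes the DER-encoded Subject Public Key Info.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}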
	I0729 20:12:28.627642  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-344518-m03 minikube.k8s.io/updated_at=2024_07_29T20_12_28_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a minikube.k8s.io/name=ha-344518 minikube.k8s.io/primary=false
	I0729 20:12:28.753291  755599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-344518-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0729 20:12:28.849098  755599 start.go:319] duration metric: took 23.784285616s to joinCluster
	I0729 20:12:28.849339  755599 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:12:28.849701  755599 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:12:28.851134  755599 out.go:177] * Verifying Kubernetes components...
	I0729 20:12:28.852378  755599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:12:29.109238  755599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:12:29.193132  755599 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:12:29.193507  755599 kapi.go:59] client config for ha-344518: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.crt", KeyFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key", CAFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0729 20:12:29.193605  755599 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.238:8443
	I0729 20:12:29.193887  755599 node_ready.go:35] waiting up to 6m0s for node "ha-344518-m03" to be "Ready" ...
	I0729 20:12:29.194004  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:29.194015  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:29.194028  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:29.194036  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:29.198929  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:12:29.694081  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:29.694110  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:29.694123  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:29.694131  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:29.696969  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:30.195083  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:30.195105  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:30.195117  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:30.195122  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:30.198251  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:30.694221  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:30.694252  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:30.694264  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:30.694271  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:30.776976  755599 round_trippers.go:574] Response Status: 200 OK in 82 milliseconds
	I0729 20:12:31.194384  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:31.194412  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:31.194424  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:31.194432  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:31.197437  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:31.197961  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:31.694342  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:31.694368  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:31.694377  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:31.694382  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:31.697493  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:32.194300  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:32.194330  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:32.194341  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:32.194348  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:32.197995  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:32.694861  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:32.694888  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:32.694900  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:32.694905  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:32.698277  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:33.195075  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:33.195103  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:33.195113  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:33.195118  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:33.198320  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:33.198991  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:33.694254  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:33.694293  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:33.694303  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:33.694307  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:33.697710  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:34.194794  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:34.194827  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:34.194838  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:34.194842  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:34.198051  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:34.694460  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:34.694486  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:34.694499  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:34.694505  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:34.697707  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:35.195117  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:35.195143  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:35.195164  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:35.195171  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:35.198488  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:35.199067  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:35.694359  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:35.694388  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:35.694400  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:35.694404  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:35.697225  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:36.194764  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:36.194786  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:36.194795  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:36.194799  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:36.198201  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:36.694395  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:36.694417  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:36.694425  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:36.694431  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:36.705811  755599 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0729 20:12:37.194827  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:37.194848  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:37.194857  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:37.194861  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:37.198311  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:37.694380  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:37.694403  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:37.694413  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:37.694416  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:37.697109  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:37.697646  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:38.194968  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:38.194992  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:38.195001  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:38.195005  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:38.198024  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:38.695188  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:38.695218  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:38.695229  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:38.695233  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:38.698390  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:39.195113  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:39.195136  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:39.195145  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:39.195156  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:39.199022  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:39.694388  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:39.694410  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:39.694419  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:39.694424  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:39.697664  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:39.698221  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:40.195074  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:40.195100  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:40.195112  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:40.195117  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:40.198365  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:40.694245  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:40.694291  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:40.694304  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:40.694310  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:40.697965  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:41.194682  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:41.194708  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:41.194719  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:41.194723  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:41.197877  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:41.694829  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:41.694853  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:41.694865  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:41.694870  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:41.698082  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:41.698573  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:42.195165  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:42.195194  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:42.195207  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:42.195214  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:42.198967  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:42.695014  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:42.695038  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:42.695047  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:42.695051  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:42.698089  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:43.194893  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:43.194918  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:43.194931  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:43.194939  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:43.198054  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:43.695187  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:43.695217  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:43.695230  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:43.695235  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:43.698780  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:43.699330  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:44.194691  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:44.194715  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:44.194724  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:44.194728  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:44.198400  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:44.694955  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:44.694981  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:44.694994  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:44.694998  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:44.698241  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:45.194448  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:45.194472  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:45.194481  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:45.194485  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:45.197719  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:45.694173  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:45.694197  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:45.694206  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:45.694212  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:45.697817  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:46.194939  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:46.194962  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.194972  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.194979  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.198301  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:46.198876  755599 node_ready.go:53] node "ha-344518-m03" has status "Ready":"False"
	I0729 20:12:46.694223  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:46.694243  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.694254  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.694259  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.698100  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:46.698587  755599 node_ready.go:49] node "ha-344518-m03" has status "Ready":"True"
	I0729 20:12:46.698607  755599 node_ready.go:38] duration metric: took 17.504700526s for node "ha-344518-m03" to be "Ready" ...
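The repeated GET /api/v1/nodes/ha-344518-m03 round trips above are the readiness poll: minikube re-reads the Node object until its Ready condition turns True. A client-go sketch of the same wait, assuming the kubeconfig path loaded earlier in this log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as loaded earlier in this log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19344-733808/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-344518-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node ha-344518-m03 is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node Ready")
}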
	I0729 20:12:46.698616  755599 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 20:12:46.698692  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:12:46.698703  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.698714  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.698724  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.707436  755599 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0729 20:12:46.713350  755599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-wzmc5" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.713431  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-wzmc5
	I0729 20:12:46.713436  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.713443  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.713449  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.716071  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.716794  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:46.716812  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.716819  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.716824  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.719004  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.719429  755599 pod_ready.go:92] pod "coredns-7db6d8ff4d-wzmc5" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:46.719447  755599 pod_ready.go:81] duration metric: took 6.075087ms for pod "coredns-7db6d8ff4d-wzmc5" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.719455  755599 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xpkp6" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.719499  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-xpkp6
	I0729 20:12:46.719507  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.719513  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.719518  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.722094  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.722639  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:46.722653  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.722662  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.722668  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.725126  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.725846  755599 pod_ready.go:92] pod "coredns-7db6d8ff4d-xpkp6" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:46.725871  755599 pod_ready.go:81] duration metric: took 6.410229ms for pod "coredns-7db6d8ff4d-xpkp6" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.725879  755599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.725948  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344518
	I0729 20:12:46.725959  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.725967  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.725970  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.728666  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.729395  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:46.729406  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.729414  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.729417  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.731496  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.731969  755599 pod_ready.go:92] pod "etcd-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:46.731987  755599 pod_ready.go:81] duration metric: took 6.102181ms for pod "etcd-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.731996  755599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.732071  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344518-m02
	I0729 20:12:46.732080  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.732087  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.732091  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.734223  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.734764  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:46.734781  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.734791  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.734798  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.737552  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:46.738176  755599 pod_ready.go:92] pod "etcd-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:46.738196  755599 pod_ready.go:81] duration metric: took 6.193814ms for pod "etcd-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.738206  755599 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:46.894576  755599 request.go:629] Waited for 156.307895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344518-m03
	I0729 20:12:46.894653  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/etcd-ha-344518-m03
	I0729 20:12:46.894659  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:46.894666  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:46.894673  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:46.898073  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:47.094541  755599 request.go:629] Waited for 195.902641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:47.094615  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:47.094623  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:47.094635  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:47.094645  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:47.097349  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:47.097912  755599 pod_ready.go:92] pod "etcd-ha-344518-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:47.097934  755599 pod_ready.go:81] duration metric: took 359.721312ms for pod "etcd-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
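The request.go:629 "Waited ... due to client-side throttling" messages are produced by client-go's own rate limiter (roughly QPS 5 and burst 10 when left at the defaults), not by server-side priority and fairness. A sketch, again assuming the kubeconfig path from this run, that raises both limits so a burst of status checks is not delayed:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19344-733808/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Raise the client-side limits; the defaults are what trigger the
	// "Waited ..." lines when many GETs are issued back to back.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}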
	I0729 20:12:47.097954  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:47.295018  755599 request.go:629] Waited for 196.989648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518
	I0729 20:12:47.295078  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518
	I0729 20:12:47.295084  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:47.295091  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:47.295096  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:47.298841  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:47.495140  755599 request.go:629] Waited for 195.383242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:47.495272  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:47.495283  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:47.495294  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:47.495301  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:47.498928  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:47.499412  755599 pod_ready.go:92] pod "kube-apiserver-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:47.499432  755599 pod_ready.go:81] duration metric: took 401.471192ms for pod "kube-apiserver-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:47.499443  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:47.694469  755599 request.go:629] Waited for 194.955371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518-m02
	I0729 20:12:47.694572  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518-m02
	I0729 20:12:47.694582  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:47.694593  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:47.694602  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:47.698239  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:47.894416  755599 request.go:629] Waited for 195.286523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:47.894487  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:47.894493  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:47.894501  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:47.894505  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:47.898022  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:47.898687  755599 pod_ready.go:92] pod "kube-apiserver-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:47.898709  755599 pod_ready.go:81] duration metric: took 399.260118ms for pod "kube-apiserver-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:47.898722  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:48.094707  755599 request.go:629] Waited for 195.891774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518-m03
	I0729 20:12:48.094772  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-344518-m03
	I0729 20:12:48.094778  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:48.094786  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:48.094789  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:48.097772  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:48.295159  755599 request.go:629] Waited for 196.548603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:48.295223  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:48.295229  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:48.295236  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:48.295241  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:48.298595  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:48.299195  755599 pod_ready.go:92] pod "kube-apiserver-ha-344518-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:48.299221  755599 pod_ready.go:81] duration metric: took 400.493245ms for pod "kube-apiserver-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:48.299232  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:48.494447  755599 request.go:629] Waited for 195.021974ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518
	I0729 20:12:48.494546  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518
	I0729 20:12:48.494558  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:48.494572  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:48.494589  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:48.497955  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:48.694851  755599 request.go:629] Waited for 196.266047ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:48.694925  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:48.694932  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:48.694943  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:48.694951  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:48.698281  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:48.699030  755599 pod_ready.go:92] pod "kube-controller-manager-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:48.699052  755599 pod_ready.go:81] duration metric: took 399.812722ms for pod "kube-controller-manager-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:48.699066  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:48.895071  755599 request.go:629] Waited for 195.895187ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518-m02
	I0729 20:12:48.895134  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518-m02
	I0729 20:12:48.895139  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:48.895157  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:48.895167  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:48.898558  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:49.094513  755599 request.go:629] Waited for 195.267376ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:49.094601  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:49.094609  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:49.094620  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:49.094629  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:49.100269  755599 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0729 20:12:49.100756  755599 pod_ready.go:92] pod "kube-controller-manager-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:49.100778  755599 pod_ready.go:81] duration metric: took 401.703428ms for pod "kube-controller-manager-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:49.100791  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:49.294926  755599 request.go:629] Waited for 194.024383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518-m03
	I0729 20:12:49.294991  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518-m03
	I0729 20:12:49.294997  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:49.295005  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:49.295011  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:49.298168  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:49.495264  755599 request.go:629] Waited for 196.358066ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:49.495331  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:49.495337  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:49.495347  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:49.495355  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:49.498359  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:49.498925  755599 pod_ready.go:92] pod "kube-controller-manager-ha-344518-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:49.498947  755599 pod_ready.go:81] duration metric: took 398.149039ms for pod "kube-controller-manager-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:49.498957  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fh6rg" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:49.694452  755599 request.go:629] Waited for 195.421058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fh6rg
	I0729 20:12:49.694520  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fh6rg
	I0729 20:12:49.694525  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:49.694532  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:49.694536  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:49.697950  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:49.895039  755599 request.go:629] Waited for 196.366117ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:49.895109  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:49.895115  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:49.895122  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:49.895126  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:49.898150  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:49.898751  755599 pod_ready.go:92] pod "kube-proxy-fh6rg" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:49.898771  755599 pod_ready.go:81] duration metric: took 399.807911ms for pod "kube-proxy-fh6rg" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:49.898780  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nfxp2" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:50.095225  755599 request.go:629] Waited for 196.35648ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nfxp2
	I0729 20:12:50.095292  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nfxp2
	I0729 20:12:50.095298  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:50.095305  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:50.095310  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:50.098510  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:50.294674  755599 request.go:629] Waited for 195.360527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:50.294771  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:50.294780  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:50.294791  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:50.294797  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:50.297738  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:50.298210  755599 pod_ready.go:92] pod "kube-proxy-nfxp2" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:50.298232  755599 pod_ready.go:81] duration metric: took 399.446317ms for pod "kube-proxy-nfxp2" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:50.298242  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s8wn5" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:50.494281  755599 request.go:629] Waited for 195.962731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8wn5
	I0729 20:12:50.494378  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s8wn5
	I0729 20:12:50.494388  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:50.494395  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:50.494404  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:50.497661  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:50.694739  755599 request.go:629] Waited for 196.392215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:50.694845  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:50.694852  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:50.694860  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:50.694866  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:50.698157  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:50.698721  755599 pod_ready.go:92] pod "kube-proxy-s8wn5" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:50.698744  755599 pod_ready.go:81] duration metric: took 400.496066ms for pod "kube-proxy-s8wn5" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:50.698754  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:50.894780  755599 request.go:629] Waited for 195.954883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518
	I0729 20:12:50.894868  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518
	I0729 20:12:50.894874  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:50.894882  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:50.894886  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:50.898020  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:51.094575  755599 request.go:629] Waited for 196.002294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:51.094670  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518
	I0729 20:12:51.094676  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:51.094685  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:51.094691  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:51.098002  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:51.098475  755599 pod_ready.go:92] pod "kube-scheduler-ha-344518" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:51.098493  755599 pod_ready.go:81] duration metric: took 399.73378ms for pod "kube-scheduler-ha-344518" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:51.098503  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:51.295276  755599 request.go:629] Waited for 196.695226ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518-m02
	I0729 20:12:51.295371  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518-m02
	I0729 20:12:51.295377  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:51.295386  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:51.295398  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:51.298463  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:51.494604  755599 request.go:629] Waited for 195.512534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:51.494660  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m02
	I0729 20:12:51.494668  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:51.494678  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:51.494685  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:51.497553  755599 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0729 20:12:51.498039  755599 pod_ready.go:92] pod "kube-scheduler-ha-344518-m02" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:51.498062  755599 pod_ready.go:81] duration metric: took 399.552682ms for pod "kube-scheduler-ha-344518-m02" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:51.498072  755599 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:51.695101  755599 request.go:629] Waited for 196.945766ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518-m03
	I0729 20:12:51.695189  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-344518-m03
	I0729 20:12:51.695196  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:51.695208  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:51.695212  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:51.698528  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:51.894595  755599 request.go:629] Waited for 195.391784ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:51.894670  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes/ha-344518-m03
	I0729 20:12:51.894678  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:51.894689  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:51.894695  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:51.897830  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:51.898422  755599 pod_ready.go:92] pod "kube-scheduler-ha-344518-m03" in "kube-system" namespace has status "Ready":"True"
	I0729 20:12:51.898444  755599 pod_ready.go:81] duration metric: took 400.364758ms for pod "kube-scheduler-ha-344518-m03" in "kube-system" namespace to be "Ready" ...
	I0729 20:12:51.898456  755599 pod_ready.go:38] duration metric: took 5.199830746s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
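	[editor's note] For reference, a minimal client-go sketch of the kind of Ready-condition poll logged above. This is illustrative only, not minikube's pod_ready.go; the kubeconfig path, namespace, and pod name are placeholders taken from this run.

	// podready_sketch.go — poll a pod until its Ready condition is True, or time out.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget the log uses per pod
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-apiserver-ha-344518", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second) // crude fixed backoff; the real client also absorbs the client-side throttling seen above
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}
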
	I0729 20:12:51.898476  755599 api_server.go:52] waiting for apiserver process to appear ...
	I0729 20:12:51.898542  755599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:12:51.912905  755599 api_server.go:72] duration metric: took 23.063467882s to wait for apiserver process to appear ...
	I0729 20:12:51.912930  755599 api_server.go:88] waiting for apiserver healthz status ...
	I0729 20:12:51.912955  755599 api_server.go:253] Checking apiserver healthz at https://192.168.39.238:8443/healthz ...
	I0729 20:12:51.917598  755599 api_server.go:279] https://192.168.39.238:8443/healthz returned 200:
	ok
	I0729 20:12:51.917694  755599 round_trippers.go:463] GET https://192.168.39.238:8443/version
	I0729 20:12:51.917705  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:51.917718  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:51.917723  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:51.918594  755599 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0729 20:12:51.918833  755599 api_server.go:141] control plane version: v1.30.3
	I0729 20:12:51.918857  755599 api_server.go:131] duration metric: took 5.918903ms to wait for apiserver health ...
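	[editor's note] A hedged sketch of the two API-server probes logged just above (GET /healthz returning "ok", then GET /version reporting v1.30.3), using the discovery client's raw REST path; the kubeconfig path is a placeholder.

	// healthz_sketch.go — probe /healthz and /version, roughly as the log above does.
	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// GET /healthz — a healthy apiserver answers 200 with the body "ok".
		body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
		if err != nil {
			panic(err)
		}
		fmt.Printf("healthz: %s\n", body)

		// GET /version — reports the control-plane version (v1.30.3 in the run above).
		v, err := client.Discovery().ServerVersion()
		if err != nil {
			panic(err)
		}
		fmt.Printf("control plane version: %s\n", v.GitVersion)
	}
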
	I0729 20:12:51.918866  755599 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 20:12:52.095130  755599 request.go:629] Waited for 176.179213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:12:52.095216  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:12:52.095221  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:52.095229  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:52.095236  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:52.101815  755599 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0729 20:12:52.108555  755599 system_pods.go:59] 24 kube-system pods found
	I0729 20:12:52.108591  755599 system_pods.go:61] "coredns-7db6d8ff4d-wzmc5" [2badd33a-9085-4e72-9934-f31c6142556e] Running
	I0729 20:12:52.108598  755599 system_pods.go:61] "coredns-7db6d8ff4d-xpkp6" [89bb48a7-72c4-4f23-aad8-530fc74e76e0] Running
	I0729 20:12:52.108603  755599 system_pods.go:61] "etcd-ha-344518" [2d9e6a92-a45e-41fc-9e29-e59128b7b830] Running
	I0729 20:12:52.108608  755599 system_pods.go:61] "etcd-ha-344518-m02" [6c6a4ddc-69fb-45bd-abbb-e51acb5da561] Running
	I0729 20:12:52.108613  755599 system_pods.go:61] "etcd-ha-344518-m03" [1e322c16-d9d5-4bf8-99b1-de5db95a3965] Running
	I0729 20:12:52.108618  755599 system_pods.go:61] "kindnet-6qbz5" [cc428fce-2821-412d-b483-782bc277c4f7] Running
	I0729 20:12:52.108624  755599 system_pods.go:61] "kindnet-jj2b4" [b53c635e-8077-466a-a171-23e84c33bd25] Running
	I0729 20:12:52.108634  755599 system_pods.go:61] "kindnet-nl4kz" [39441191-433d-4abc-b0c8-d4114713f68a] Running
	I0729 20:12:52.108639  755599 system_pods.go:61] "kube-apiserver-ha-344518" [aadbbdf5-6f91-4232-8c08-fc2f91cf35e5] Running
	I0729 20:12:52.108645  755599 system_pods.go:61] "kube-apiserver-ha-344518-m02" [2bc89a1d-0681-451a-bb47-0d82fbeb6a0f] Running
	I0729 20:12:52.108651  755599 system_pods.go:61] "kube-apiserver-ha-344518-m03" [4c708671-9ded-4b8e-80e4-58182a79597d] Running
	I0729 20:12:52.108658  755599 system_pods.go:61] "kube-controller-manager-ha-344518" [3c1f20e1-80d6-4bef-a115-d4e62d3d938e] Running
	I0729 20:12:52.108666  755599 system_pods.go:61] "kube-controller-manager-ha-344518-m02" [31b506c1-6be7-4e9a-a96e-b2ac161edcab] Running
	I0729 20:12:52.108672  755599 system_pods.go:61] "kube-controller-manager-ha-344518-m03" [9a23ca85-bda2-4023-b05d-b3c0ceba1e67] Running
	I0729 20:12:52.108677  755599 system_pods.go:61] "kube-proxy-fh6rg" [275f3f36-39e1-461a-9c4d-4b2d8773d325] Running
	I0729 20:12:52.108683  755599 system_pods.go:61] "kube-proxy-nfxp2" [827466b6-aa03-4707-8594-b5eaaa864ebe] Running
	I0729 20:12:52.108691  755599 system_pods.go:61] "kube-proxy-s8wn5" [cd1b4894-f7bf-4249-a6d8-c89bbe6e2ab7] Running
	I0729 20:12:52.108697  755599 system_pods.go:61] "kube-scheduler-ha-344518" [e8ae3853-ac48-46fa-88b6-31b4c0f2c527] Running
	I0729 20:12:52.108704  755599 system_pods.go:61] "kube-scheduler-ha-344518-m02" [bd8f41d2-f637-4c19-8b66-7ffc1513d895] Running
	I0729 20:12:52.108710  755599 system_pods.go:61] "kube-scheduler-ha-344518-m03" [500b3aea-f25e-4aae-84d6-b261db07b35a] Running
	I0729 20:12:52.108716  755599 system_pods.go:61] "kube-vip-ha-344518" [140d2a2f-c461-421e-9b01-a5e6d7f2b9f8] Running
	I0729 20:12:52.108722  755599 system_pods.go:61] "kube-vip-ha-344518-m02" [6024c813-df16-43b4-83cc-e978ceb00d51] Running
	I0729 20:12:52.108728  755599 system_pods.go:61] "kube-vip-ha-344518-m03" [45610f87-5e2d-46c3-8f8f-ba77b685fd86] Running
	I0729 20:12:52.108733  755599 system_pods.go:61] "storage-provisioner" [9e8bd9d2-8adf-47de-8e32-05d64002a631] Running
	I0729 20:12:52.108743  755599 system_pods.go:74] duration metric: took 189.869611ms to wait for pod list to return data ...
	I0729 20:12:52.108756  755599 default_sa.go:34] waiting for default service account to be created ...
	I0729 20:12:52.295212  755599 request.go:629] Waited for 186.362246ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0729 20:12:52.295338  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/default/serviceaccounts
	I0729 20:12:52.295350  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:52.295362  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:52.295372  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:52.298650  755599 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0729 20:12:52.298814  755599 default_sa.go:45] found service account: "default"
	I0729 20:12:52.298835  755599 default_sa.go:55] duration metric: took 190.069659ms for default service account to be created ...
	I0729 20:12:52.298846  755599 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 20:12:52.494241  755599 request.go:629] Waited for 195.307096ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:12:52.494345  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/namespaces/kube-system/pods
	I0729 20:12:52.494353  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:52.494363  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:52.494371  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:52.508285  755599 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0729 20:12:52.516937  755599 system_pods.go:86] 24 kube-system pods found
	I0729 20:12:52.516968  755599 system_pods.go:89] "coredns-7db6d8ff4d-wzmc5" [2badd33a-9085-4e72-9934-f31c6142556e] Running
	I0729 20:12:52.516974  755599 system_pods.go:89] "coredns-7db6d8ff4d-xpkp6" [89bb48a7-72c4-4f23-aad8-530fc74e76e0] Running
	I0729 20:12:52.516978  755599 system_pods.go:89] "etcd-ha-344518" [2d9e6a92-a45e-41fc-9e29-e59128b7b830] Running
	I0729 20:12:52.516983  755599 system_pods.go:89] "etcd-ha-344518-m02" [6c6a4ddc-69fb-45bd-abbb-e51acb5da561] Running
	I0729 20:12:52.516986  755599 system_pods.go:89] "etcd-ha-344518-m03" [1e322c16-d9d5-4bf8-99b1-de5db95a3965] Running
	I0729 20:12:52.516990  755599 system_pods.go:89] "kindnet-6qbz5" [cc428fce-2821-412d-b483-782bc277c4f7] Running
	I0729 20:12:52.516994  755599 system_pods.go:89] "kindnet-jj2b4" [b53c635e-8077-466a-a171-23e84c33bd25] Running
	I0729 20:12:52.516998  755599 system_pods.go:89] "kindnet-nl4kz" [39441191-433d-4abc-b0c8-d4114713f68a] Running
	I0729 20:12:52.517001  755599 system_pods.go:89] "kube-apiserver-ha-344518" [aadbbdf5-6f91-4232-8c08-fc2f91cf35e5] Running
	I0729 20:12:52.517006  755599 system_pods.go:89] "kube-apiserver-ha-344518-m02" [2bc89a1d-0681-451a-bb47-0d82fbeb6a0f] Running
	I0729 20:12:52.517010  755599 system_pods.go:89] "kube-apiserver-ha-344518-m03" [4c708671-9ded-4b8e-80e4-58182a79597d] Running
	I0729 20:12:52.517014  755599 system_pods.go:89] "kube-controller-manager-ha-344518" [3c1f20e1-80d6-4bef-a115-d4e62d3d938e] Running
	I0729 20:12:52.517018  755599 system_pods.go:89] "kube-controller-manager-ha-344518-m02" [31b506c1-6be7-4e9a-a96e-b2ac161edcab] Running
	I0729 20:12:52.517022  755599 system_pods.go:89] "kube-controller-manager-ha-344518-m03" [9a23ca85-bda2-4023-b05d-b3c0ceba1e67] Running
	I0729 20:12:52.517026  755599 system_pods.go:89] "kube-proxy-fh6rg" [275f3f36-39e1-461a-9c4d-4b2d8773d325] Running
	I0729 20:12:52.517030  755599 system_pods.go:89] "kube-proxy-nfxp2" [827466b6-aa03-4707-8594-b5eaaa864ebe] Running
	I0729 20:12:52.517033  755599 system_pods.go:89] "kube-proxy-s8wn5" [cd1b4894-f7bf-4249-a6d8-c89bbe6e2ab7] Running
	I0729 20:12:52.517037  755599 system_pods.go:89] "kube-scheduler-ha-344518" [e8ae3853-ac48-46fa-88b6-31b4c0f2c527] Running
	I0729 20:12:52.517041  755599 system_pods.go:89] "kube-scheduler-ha-344518-m02" [bd8f41d2-f637-4c19-8b66-7ffc1513d895] Running
	I0729 20:12:52.517045  755599 system_pods.go:89] "kube-scheduler-ha-344518-m03" [500b3aea-f25e-4aae-84d6-b261db07b35a] Running
	I0729 20:12:52.517049  755599 system_pods.go:89] "kube-vip-ha-344518" [140d2a2f-c461-421e-9b01-a5e6d7f2b9f8] Running
	I0729 20:12:52.517052  755599 system_pods.go:89] "kube-vip-ha-344518-m02" [6024c813-df16-43b4-83cc-e978ceb00d51] Running
	I0729 20:12:52.517057  755599 system_pods.go:89] "kube-vip-ha-344518-m03" [45610f87-5e2d-46c3-8f8f-ba77b685fd86] Running
	I0729 20:12:52.517061  755599 system_pods.go:89] "storage-provisioner" [9e8bd9d2-8adf-47de-8e32-05d64002a631] Running
	I0729 20:12:52.517068  755599 system_pods.go:126] duration metric: took 218.213547ms to wait for k8s-apps to be running ...
	I0729 20:12:52.517075  755599 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 20:12:52.517123  755599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:12:52.530943  755599 system_svc.go:56] duration metric: took 13.856488ms WaitForService to wait for kubelet
	I0729 20:12:52.530976  755599 kubeadm.go:582] duration metric: took 23.681542554s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
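	[editor's note] The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" inside the VM over SSH. A local, illustrative sketch of the same exit-code check (the SSH transport is omitted):

	// kubelet_check_sketch.go — "--quiet" suppresses output; the unit state is conveyed by the exit code.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}
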
	I0729 20:12:52.530998  755599 node_conditions.go:102] verifying NodePressure condition ...
	I0729 20:12:52.694327  755599 request.go:629] Waited for 163.250579ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.238:8443/api/v1/nodes
	I0729 20:12:52.694419  755599 round_trippers.go:463] GET https://192.168.39.238:8443/api/v1/nodes
	I0729 20:12:52.694426  755599 round_trippers.go:469] Request Headers:
	I0729 20:12:52.694438  755599 round_trippers.go:473]     Accept: application/json, */*
	I0729 20:12:52.694447  755599 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0729 20:12:52.699196  755599 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0729 20:12:52.700897  755599 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 20:12:52.700926  755599 node_conditions.go:123] node cpu capacity is 2
	I0729 20:12:52.700940  755599 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 20:12:52.700945  755599 node_conditions.go:123] node cpu capacity is 2
	I0729 20:12:52.700951  755599 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 20:12:52.700956  755599 node_conditions.go:123] node cpu capacity is 2
	I0729 20:12:52.700960  755599 node_conditions.go:105] duration metric: took 169.957801ms to run NodePressure ...
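	[editor's note] The NodePressure step above lists the three nodes and reports their ephemeral-storage and cpu capacity. A minimal sketch of how those fields can be read with client-go; the kubeconfig path is a placeholder.

	// node_capacity_sketch.go — list nodes and print the two capacity fields reported above.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}
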
	I0729 20:12:52.700974  755599 start.go:241] waiting for startup goroutines ...
	I0729 20:12:52.701000  755599 start.go:255] writing updated cluster config ...
	I0729 20:12:52.701369  755599 ssh_runner.go:195] Run: rm -f paused
	I0729 20:12:52.753542  755599 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 20:12:52.756602  755599 out.go:177] * Done! kubectl is now configured to use "ha-344518" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.653993822Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-fp24v,Uid:34dba935-70e7-453a-996e-56c88c2e27ab,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722283974874650944,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T20:12:53.665052560Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wzmc5,Uid:2badd33a-9085-4e72-9934-f31c6142556e,Namespace:kube-system,Attempt:0,},State:S
ANDBOX_READY,CreatedAt:1722283817499345893,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T20:10:17.190603996Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f573eda8597209c29238367b1f588877b95eb9b1d83c0fe5ec4559abd73e9f9e,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9e8bd9d2-8adf-47de-8e32-05d64002a631,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722283817495975457,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{kubec
tl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T20:10:17.190379867Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xpkp6,Uid:89bb48a7-72c4-4f23-aad8-530fc74e76e0,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1722283817490483707,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T20:10:17.183921831Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&PodSandboxMetadata{Name:kube-proxy-fh6rg,Uid:275f3f36-39e1-461a-9c4d-4b2d8773d325,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722283802210509181,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]
string{kubernetes.io/config.seen: 2024-07-29T20:10:01.298076534Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&PodSandboxMetadata{Name:kindnet-nl4kz,Uid:39441191-433d-4abc-b0c8-d4114713f68a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722283802192115435,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T20:10:01.284341089Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a370dcc0d3fedf538e097c9771a00ae71d07e4d428cf20405b91bab4226a52f0,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-344518,Uid:cd59779c0bf07be17ee08a6f723c6a83,Namespace:kube-system,
Attempt:0,},State:SANDBOX_READY,CreatedAt:1722283781226892059,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cd59779c0bf07be17ee08a6f723c6a83,kubernetes.io/config.seen: 2024-07-29T20:09:40.750756996Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:be0b5e7879a7b2011da181d648a80b8faeacd356119b7dd220aa8c4bc5e91e21,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-344518,Uid:0fe3753966d0edf57072c858a7289147,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722283781209594127,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a728914
7,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.238:8443,kubernetes.io/config.hash: 0fe3753966d0edf57072c858a7289147,kubernetes.io/config.seen: 2024-07-29T20:09:40.750755949Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-344518,Uid:1a4f4fa7d6914af3b75fc6bf4496723b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722283781208804484,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1a4f4fa7d6914af3b75fc6bf4496723b,kubernetes.io/config.seen: 2024-07-29T20:09:40.750749661Z,kubernetes.io/config.source: file,},RuntimeHandler:,
},&PodSandbox{Id:b4ddbe2050711fe94070483c80962bd7e541ed1a648aeb3a3d80e24b4473e69d,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-344518,Uid:9219d8412b921256fe48925a08aef04f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722283781207990853,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9219d8412b921256fe48925a08aef04f,},Annotations:map[string]string{kubernetes.io/config.hash: 9219d8412b921256fe48925a08aef04f,kubernetes.io/config.seen: 2024-07-29T20:09:40.750753299Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&PodSandboxMetadata{Name:etcd-ha-344518,Uid:2baca04111e38314ac51bacec8d115e3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1722283781204804104,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-344518,i
o.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.238:2379,kubernetes.io/config.hash: 2baca04111e38314ac51bacec8d115e3,kubernetes.io/config.seen: 2024-07-29T20:09:40.750754470Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1dd80384-908a-4c6a-83e4-b7e641676a93 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.655273592Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=44448f7f-8b9d-4b91-a29d-cfb20a9b2301 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.655355259Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=44448f7f-8b9d-4b91-a29d-cfb20a9b2301 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.655678814Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722283977503459045,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150057459b6854002f094be091609a708f47a33e024e971dd0a52ee45059feea,PodSandboxId:f573eda8597209c29238367b1f588877b95eb9b1d83c0fe5ec4559abd73e9f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722283817758357207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817764517491,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817701820768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-90
85-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722283806075671801,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172228380
2307884166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bf9f11f403485bba11bb296707954ef1f3951cd0686f3c2aef04ec544f6dfb,PodSandboxId:b4ddbe2050711fe94070483c80962bd7e541ed1a648aeb3a3d80e24b4473e69d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222837840
59244100,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9219d8412b921256fe48925a08aef04f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722283781396419801,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722283781452307427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e957bb1c15cb6b1d0159a0941f43678dfa08f25dc582d6dd58a8d0b4f5f5c00,PodSandboxId:a370dcc0d3fedf538e097c9771a00ae71d07e4d428cf20405b91bab4226a52f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722283781401013403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1cab255995a78a5644e30400e94f037504f1f6a162cac7023d3b2074899a0e7,PodSandboxId:be0b5e7879a7b2011da181d648a80b8faeacd356119b7dd220aa8c4bc5e91e21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722283781423950675,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=44448f7f-8b9d-4b91-a29d-cfb20a9b2301 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.662307188Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e81927a5-4270-4557-8760-ba4b2e312c7b name=/runtime.v1.RuntimeService/Version
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.663004087Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e81927a5-4270-4557-8760-ba4b2e312c7b name=/runtime.v1.RuntimeService/Version
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.665983179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7cb9cf1e-de75-4d60-b9d5-04fec7a38245 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.666715067Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722284255666687220,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7cb9cf1e-de75-4d60-b9d5-04fec7a38245 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.667360161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0c16287-7fc8-4b2f-a384-0d7886965479 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.667448542Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0c16287-7fc8-4b2f-a384-0d7886965479 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.667752054Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722283977503459045,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150057459b6854002f094be091609a708f47a33e024e971dd0a52ee45059feea,PodSandboxId:f573eda8597209c29238367b1f588877b95eb9b1d83c0fe5ec4559abd73e9f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722283817758357207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817764517491,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817701820768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-90
85-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722283806075671801,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172228380
2307884166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bf9f11f403485bba11bb296707954ef1f3951cd0686f3c2aef04ec544f6dfb,PodSandboxId:b4ddbe2050711fe94070483c80962bd7e541ed1a648aeb3a3d80e24b4473e69d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222837840
59244100,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9219d8412b921256fe48925a08aef04f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722283781396419801,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722283781452307427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e957bb1c15cb6b1d0159a0941f43678dfa08f25dc582d6dd58a8d0b4f5f5c00,PodSandboxId:a370dcc0d3fedf538e097c9771a00ae71d07e4d428cf20405b91bab4226a52f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722283781401013403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1cab255995a78a5644e30400e94f037504f1f6a162cac7023d3b2074899a0e7,PodSandboxId:be0b5e7879a7b2011da181d648a80b8faeacd356119b7dd220aa8c4bc5e91e21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722283781423950675,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0c16287-7fc8-4b2f-a384-0d7886965479 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.705330336Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08e906f6-5c73-45ff-aa9c-b27655ec2883 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.705419192Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08e906f6-5c73-45ff-aa9c-b27655ec2883 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.706300861Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5ade3ba-38be-4dbe-8b94-77826e466c40 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.706757442Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722284255706733082,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5ade3ba-38be-4dbe-8b94-77826e466c40 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.707401437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bc5c280d-40fc-4a41-a5b1-6212ba835ca2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.707471909Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bc5c280d-40fc-4a41-a5b1-6212ba835ca2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.707826696Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722283977503459045,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150057459b6854002f094be091609a708f47a33e024e971dd0a52ee45059feea,PodSandboxId:f573eda8597209c29238367b1f588877b95eb9b1d83c0fe5ec4559abd73e9f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722283817758357207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817764517491,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817701820768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-90
85-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722283806075671801,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172228380
2307884166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bf9f11f403485bba11bb296707954ef1f3951cd0686f3c2aef04ec544f6dfb,PodSandboxId:b4ddbe2050711fe94070483c80962bd7e541ed1a648aeb3a3d80e24b4473e69d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222837840
59244100,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9219d8412b921256fe48925a08aef04f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722283781396419801,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722283781452307427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e957bb1c15cb6b1d0159a0941f43678dfa08f25dc582d6dd58a8d0b4f5f5c00,PodSandboxId:a370dcc0d3fedf538e097c9771a00ae71d07e4d428cf20405b91bab4226a52f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722283781401013403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1cab255995a78a5644e30400e94f037504f1f6a162cac7023d3b2074899a0e7,PodSandboxId:be0b5e7879a7b2011da181d648a80b8faeacd356119b7dd220aa8c4bc5e91e21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722283781423950675,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bc5c280d-40fc-4a41-a5b1-6212ba835ca2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.754436949Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0a6ea24a-b179-49d8-8701-047b562dbf58 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.754510003Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0a6ea24a-b179-49d8-8701-047b562dbf58 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.755754282Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca3d47d7-475a-4217-a214-4b067afede9c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.756416724Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722284255756388664,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca3d47d7-475a-4217-a214-4b067afede9c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.757027222Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=333f9eaa-2f25-48e3-89bd-5278871f8eb2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.757098809Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=333f9eaa-2f25-48e3-89bd-5278871f8eb2 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:17:35 ha-344518 crio[679]: time="2024-07-29 20:17:35.757451027Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722283977503459045,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:150057459b6854002f094be091609a708f47a33e024e971dd0a52ee45059feea,PodSandboxId:f573eda8597209c29238367b1f588877b95eb9b1d83c0fe5ec4559abd73e9f9e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722283817758357207,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817764517491,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722283817701820768,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-90
85-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CO
NTAINER_RUNNING,CreatedAt:1722283806075671801,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:172228380
2307884166,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5bf9f11f403485bba11bb296707954ef1f3951cd0686f3c2aef04ec544f6dfb,PodSandboxId:b4ddbe2050711fe94070483c80962bd7e541ed1a648aeb3a3d80e24b4473e69d,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:17222837840
59244100,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9219d8412b921256fe48925a08aef04f,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722283781396419801,Labels:map[string]string{io.kubern
etes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722283781452307427,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.ku
bernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e957bb1c15cb6b1d0159a0941f43678dfa08f25dc582d6dd58a8d0b4f5f5c00,PodSandboxId:a370dcc0d3fedf538e097c9771a00ae71d07e4d428cf20405b91bab4226a52f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722283781401013403,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kuber
netes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1cab255995a78a5644e30400e94f037504f1f6a162cac7023d3b2074899a0e7,PodSandboxId:be0b5e7879a7b2011da181d648a80b8faeacd356119b7dd220aa8c4bc5e91e21,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722283781423950675,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=333f9eaa-2f25-48e3-89bd-5278871f8eb2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	962f37271e54d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   4fd5554044288       busybox-fc5497c4f-fp24v
	7bed7bb792810       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   e6598d2da30cd       coredns-7db6d8ff4d-xpkp6
	150057459b685       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   f573eda859720       storage-provisioner
	4d27dc2036f3c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      7 minutes ago       Running             coredns                   0                   ffb2234aef191       coredns-7db6d8ff4d-wzmc5
	594577e4d332f       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    7 minutes ago       Running             kindnet-cni               0                   aa3121e476fc2       kindnet-nl4kz
	d79e4f49251f6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      7 minutes ago       Running             kube-proxy                0                   08408a18bb915       kube-proxy-fh6rg
	a5bf9f11f4034       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     7 minutes ago       Running             kube-vip                  0                   b4ddbe2050711       kube-vip-ha-344518
	1121b90510c21       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      7 minutes ago       Running             kube-scheduler            0                   b61bed291d877       kube-scheduler-ha-344518
	d1cab255995a7       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      7 minutes ago       Running             kube-apiserver            0                   be0b5e7879a7b       kube-apiserver-ha-344518
	3e957bb1c15cb       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      7 minutes ago       Running             kube-controller-manager   0                   a370dcc0d3fed       kube-controller-manager-ha-344518
	a0e14d313861e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      7 minutes ago       Running             etcd                      0                   259cc56efacfd       etcd-ha-344518
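	
	For reference, a listing like the one above can usually be reproduced from the host with crictl inside the minikube guest. This is an illustrative sketch only, assuming the ha-344518 profile from this run and CRI-O's default socket:
	
	  $ minikube ssh -p ha-344518 -- sudo crictl ps -a      # all containers, including exited ones
	  $ minikube ssh -p ha-344518 -- sudo crictl pods       # pod sandboxes backing those containers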
	
	
	==> coredns [4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a] <==
	[INFO] 10.244.0.4:37485 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000059285s
	[INFO] 10.244.1.2:48771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251747s
	[INFO] 10.244.1.2:44435 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001414617s
	[INFO] 10.244.2.2:38735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112057s
	[INFO] 10.244.2.2:35340 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003153904s
	[INFO] 10.244.2.2:54596 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140336s
	[INFO] 10.244.0.4:38854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001949954s
	[INFO] 10.244.0.4:39933 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113699s
	[INFO] 10.244.0.4:54725 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150049s
	[INFO] 10.244.1.2:46191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115875s
	[INFO] 10.244.1.2:54023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001742745s
	[INFO] 10.244.1.2:51538 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140285s
	[INFO] 10.244.1.2:56008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088578s
	[INFO] 10.244.2.2:44895 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095319s
	[INFO] 10.244.2.2:40784 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167082s
	[INFO] 10.244.0.4:48376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120067s
	[INFO] 10.244.0.4:39840 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111609s
	[INFO] 10.244.0.4:38416 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058031s
	[INFO] 10.244.1.2:42578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176608s
	[INFO] 10.244.2.2:48597 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139446s
	[INFO] 10.244.2.2:51477 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106731s
	[INFO] 10.244.0.4:47399 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109762s
	[INFO] 10.244.0.4:48496 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126806s
	[INFO] 10.244.1.2:33090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183559s
	[INFO] 10.244.1.2:58207 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095513s
	
	
	==> coredns [7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c] <==
	[INFO] 10.244.2.2:45817 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00025205s
	[INFO] 10.244.2.2:60259 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158897s
	[INFO] 10.244.2.2:59354 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000146719s
	[INFO] 10.244.2.2:40109 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117861s
	[INFO] 10.244.0.4:43889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020394s
	[INFO] 10.244.0.4:34685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072181s
	[INFO] 10.244.0.4:59825 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001335615s
	[INFO] 10.244.0.4:51461 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176686s
	[INFO] 10.244.0.4:35140 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051586s
	[INFO] 10.244.1.2:54871 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115274s
	[INFO] 10.244.1.2:51590 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001521426s
	[INFO] 10.244.1.2:60677 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011059s
	[INFO] 10.244.1.2:48005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106929s
	[INFO] 10.244.2.2:58992 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110446s
	[INFO] 10.244.2.2:41728 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108732s
	[INFO] 10.244.0.4:38164 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104442s
	[INFO] 10.244.1.2:47258 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118558s
	[INFO] 10.244.1.2:38089 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092315s
	[INFO] 10.244.1.2:33841 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075348s
	[INFO] 10.244.2.2:33549 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013334s
	[INFO] 10.244.2.2:53967 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203235s
	[INFO] 10.244.0.4:37211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128698s
	[INFO] 10.244.0.4:50842 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112886s
	[INFO] 10.244.1.2:51560 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000281444s
	[INFO] 10.244.1.2:48121 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000072064s
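	
	The entries above are in-cluster A/AAAA/PTR lookups (kubernetes.default, host.minikube.internal) hitting CoreDNS. Similar log lines can be generated by hand from a pod; an illustrative check using the busybox pod name from this run:
	
	  $ kubectl exec busybox-fc5497c4f-fp24v -- nslookup kubernetes.default.svc.cluster.local
	  $ kubectl exec busybox-fc5497c4f-fp24v -- nslookup host.minikube.internal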
	
	
	==> describe nodes <==
	Name:               ha-344518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T20_09_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:09:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:17:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:13:21 +0000   Mon, 29 Jul 2024 20:09:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:13:21 +0000   Mon, 29 Jul 2024 20:09:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:13:21 +0000   Mon, 29 Jul 2024 20:09:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:13:21 +0000   Mon, 29 Jul 2024 20:10:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-344518
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58926cc84a1545f2aed136a3e761f2be
	  System UUID:                58926cc8-4a15-45f2-aed1-36a3e761f2be
	  Boot ID:                    53511801-74aa-43cb-9108-0a1fffab4f32
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fp24v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 coredns-7db6d8ff4d-wzmc5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m35s
	  kube-system                 coredns-7db6d8ff4d-xpkp6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m35s
	  kube-system                 etcd-ha-344518                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m49s
	  kube-system                 kindnet-nl4kz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m35s
	  kube-system                 kube-apiserver-ha-344518             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-controller-manager-ha-344518    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-proxy-fh6rg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	  kube-system                 kube-scheduler-ha-344518             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-vip-ha-344518                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m51s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m33s  kube-proxy       
	  Normal  Starting                 7m49s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m49s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m49s  kubelet          Node ha-344518 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m49s  kubelet          Node ha-344518 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m49s  kubelet          Node ha-344518 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m36s  node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	  Normal  NodeReady                7m19s  kubelet          Node ha-344518 status is now: NodeReady
	  Normal  RegisteredNode           6m3s   node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	  Normal  RegisteredNode           4m53s  node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	
	
	Name:               ha-344518-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T20_11_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:11:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:14:08 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 20:13:18 +0000   Mon, 29 Jul 2024 20:14:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 20:13:18 +0000   Mon, 29 Jul 2024 20:14:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 20:13:18 +0000   Mon, 29 Jul 2024 20:14:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 20:13:18 +0000   Mon, 29 Jul 2024 20:14:48 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-344518-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9e624f7b4f7644519a6f4690f28614c0
	  System UUID:                9e624f7b-4f76-4451-9a6f-4690f28614c0
	  Boot ID:                    e119378b-e8db-4356-9172-068b6b98830d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xn8rr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 etcd-ha-344518-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m15s
	  kube-system                 kindnet-jj2b4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m21s
	  kube-system                 kube-apiserver-ha-344518-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-344518-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m12s
	  kube-system                 kube-proxy-nfxp2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m21s
	  kube-system                 kube-scheduler-ha-344518-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-vip-ha-344518-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m16s                  kube-proxy       
	  Normal  RegisteredNode           6m21s                  node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  NodeHasSufficientMemory  6m21s (x8 over 6m21s)  kubelet          Node ha-344518-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m21s (x8 over 6m21s)  kubelet          Node ha-344518-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m21s (x7 over 6m21s)  kubelet          Node ha-344518-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m3s                   node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  NodeNotReady             2m48s                  node-controller  Node ha-344518-m02 status is now: NodeNotReady
	
	
	Name:               ha-344518-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T20_12_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:12:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:17:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:13:26 +0000   Mon, 29 Jul 2024 20:12:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:13:26 +0000   Mon, 29 Jul 2024 20:12:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:13:26 +0000   Mon, 29 Jul 2024 20:12:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:13:26 +0000   Mon, 29 Jul 2024 20:12:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-344518-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 41330caf582148fd80914bd6e0732453
	  System UUID:                41330caf-5821-48fd-8091-4bd6e0732453
	  Boot ID:                    2135b6f7-7490-484b-8671-5d7e83df96c0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-22rcc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 etcd-ha-344518-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m9s
	  kube-system                 kindnet-6qbz5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m11s
	  kube-system                 kube-apiserver-ha-344518-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-controller-manager-ha-344518-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-s8wn5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-scheduler-ha-344518-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-vip-ha-344518-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m5s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  5m11s (x8 over 5m11s)  kubelet          Node ha-344518-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m11s (x8 over 5m11s)  kubelet          Node ha-344518-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m11s (x7 over 5m11s)  kubelet          Node ha-344518-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m8s                   node-controller  Node ha-344518-m03 event: Registered Node ha-344518-m03 in Controller
	  Normal  RegisteredNode           5m6s                   node-controller  Node ha-344518-m03 event: Registered Node ha-344518-m03 in Controller
	  Normal  RegisteredNode           4m53s                  node-controller  Node ha-344518-m03 event: Registered Node ha-344518-m03 in Controller
	
	
	Name:               ha-344518-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T20_13_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:13:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:17:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:14:01 +0000   Mon, 29 Jul 2024 20:13:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:14:01 +0000   Mon, 29 Jul 2024 20:13:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:14:01 +0000   Mon, 29 Jul 2024 20:13:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:14:01 +0000   Mon, 29 Jul 2024 20:13:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-344518-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8a26135ecab4ebcafa4c947c9d6f013
	  System UUID:                d8a26135-ecab-4ebc-afa4-c947c9d6f013
	  Boot ID:                    245dfa10-a723-4afd-9297-c2f80c37bd37
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4m6xw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m6s
	  kube-system                 kube-proxy-947zc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal  NodeHasSufficientMemory  4m6s (x2 over 4m6s)  kubelet          Node ha-344518-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x2 over 4m6s)  kubelet          Node ha-344518-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x2 over 4m6s)  kubelet          Node ha-344518-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal  RegisteredNode           4m3s                 node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal  NodeReady                3m46s                kubelet          Node ha-344518-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul29 20:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050285] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036102] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.678115] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.781096] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.549111] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.281405] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.054666] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050707] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.158935] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.126079] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.245623] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.820743] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.869843] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.068841] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.242210] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.084855] kauditd_printk_skb: 79 callbacks suppressed
	[Jul29 20:10] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.358609] kauditd_printk_skb: 38 callbacks suppressed
	[Jul29 20:11] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50] <==
	{"level":"warn","ts":"2024-07-29T20:17:36.011302Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.021632Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.028904Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.034991Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.038173Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.041764Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.050118Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.057991Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.063438Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.064316Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.067349Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.070939Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.077506Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.078515Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.087Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.08781Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.089039Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.091447Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.096661Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.100925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.103659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.110762Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.116686Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.125925Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-07-29T20:17:36.162792Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"fff3906243738b90","from":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:17:36 up 8 min,  0 users,  load average: 0.25, 0.21, 0.11
	Linux ha-344518 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f] <==
	I0729 20:16:57.004389       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:17:06.996565       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:17:06.996601       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:17:06.996727       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:17:06.996747       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	I0729 20:17:06.996814       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:17:06.996833       1 main.go:299] handling current node
	I0729 20:17:06.996846       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:17:06.996850       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:17:17.004379       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:17:17.004497       1 main.go:299] handling current node
	I0729 20:17:17.004527       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:17:17.004546       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:17:17.004689       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:17:17.004711       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:17:17.004772       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:17:17.004791       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	I0729 20:17:27.004356       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:17:27.004405       1 main.go:299] handling current node
	I0729 20:17:27.004427       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:17:27.004433       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:17:27.004615       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:17:27.004636       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:17:27.004706       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:17:27.004711       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [d1cab255995a78a5644e30400e94f037504f1f6a162cac7023d3b2074899a0e7] <==
	I0729 20:09:47.704347       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 20:09:47.719304       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0729 20:09:47.730068       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 20:10:01.225707       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0729 20:10:01.336350       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0729 20:12:58.853300       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54572: use of closed network connection
	E0729 20:12:59.045359       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54594: use of closed network connection
	E0729 20:12:59.243152       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54616: use of closed network connection
	E0729 20:12:59.420160       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54646: use of closed network connection
	E0729 20:12:59.607976       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54662: use of closed network connection
	E0729 20:12:59.793614       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54688: use of closed network connection
	E0729 20:12:59.975057       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54710: use of closed network connection
	E0729 20:13:00.147484       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54736: use of closed network connection
	E0729 20:13:00.628852       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54780: use of closed network connection
	E0729 20:13:00.800905       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54798: use of closed network connection
	E0729 20:13:00.976476       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54822: use of closed network connection
	E0729 20:13:01.153866       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54848: use of closed network connection
	E0729 20:13:01.332956       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54866: use of closed network connection
	E0729 20:13:01.519905       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:54882: use of closed network connection
	I0729 20:13:32.787350       1 trace.go:236] Trace[1732495531]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:2730872b-4fc5-4dad-9025-244522ad211d,client:192.168.39.70,api-group:,api-version:v1,name:kindnet,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/kindnet/token,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 20:13:32.148) (total time: 638ms):
	Trace[1732495531]: ---"watchCache locked acquired" 636ms (20:13:32.784)
	Trace[1732495531]: [638.590252ms] [638.590252ms] END
	I0729 20:13:32.945993       1 trace.go:236] Trace[1011466498]: "Delete" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:8c30973c-dc0f-460a-aab1-8468700473ee,client:192.168.39.70,api-group:,api-version:v1,name:kube-proxy-zwtzc,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-proxy-zwtzc,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:DELETE (29-Jul-2024 20:13:32.129) (total time: 815ms):
	Trace[1011466498]: ---"Object deleted from database" 525ms (20:13:32.945)
	Trace[1011466498]: [815.977524ms] [815.977524ms] END
	
	
	==> kube-controller-manager [3e957bb1c15cb6b1d0159a0941f43678dfa08f25dc582d6dd58a8d0b4f5f5c00] <==
	I0729 20:12:25.735929       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-344518-m03" podCIDRs=["10.244.2.0/24"]
	I0729 20:12:30.571336       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-344518-m03"
	I0729 20:12:53.686251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="127.61757ms"
	I0729 20:12:53.787072       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.728085ms"
	I0729 20:12:53.909984       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="122.430787ms"
	I0729 20:12:53.936261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.078542ms"
	I0729 20:12:53.936450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="78.308µs"
	I0729 20:12:54.008040       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.967339ms"
	I0729 20:12:54.008156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="79.547µs"
	I0729 20:12:55.373149       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.524µs"
	I0729 20:12:55.662573       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.19µs"
	I0729 20:12:56.787645       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.464186ms"
	I0729 20:12:56.787756       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.972µs"
	I0729 20:12:57.969346       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.718317ms"
	I0729 20:12:57.970264       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="129.528µs"
	I0729 20:12:58.438280       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.631889ms"
	I0729 20:12:58.438474       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.744µs"
	E0729 20:13:30.037081       1 certificate_controller.go:146] Sync csr-xvt5c failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-xvt5c": the object has been modified; please apply your changes to the latest version and try again
	I0729 20:13:30.312902       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-344518-m04\" does not exist"
	I0729 20:13:30.349782       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-344518-m04" podCIDRs=["10.244.3.0/24"]
	I0729 20:13:30.583559       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-344518-m04"
	I0729 20:13:50.809435       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-344518-m04"
	I0729 20:14:48.871711       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-344518-m04"
	I0729 20:14:49.083520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.785544ms"
	I0729 20:14:49.083670       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="67.226µs"
	
	
	==> kube-proxy [d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454] <==
	I0729 20:10:02.484332       1 server_linux.go:69] "Using iptables proxy"
	I0729 20:10:02.506903       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.238"]
	I0729 20:10:02.566932       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 20:10:02.567033       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 20:10:02.567075       1 server_linux.go:165] "Using iptables Proxier"
	I0729 20:10:02.570607       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 20:10:02.570991       1 server.go:872] "Version info" version="v1.30.3"
	I0729 20:10:02.571273       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:10:02.574335       1 config.go:192] "Starting service config controller"
	I0729 20:10:02.574750       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 20:10:02.574896       1 config.go:101] "Starting endpoint slice config controller"
	I0729 20:10:02.574926       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 20:10:02.579386       1 config.go:319] "Starting node config controller"
	I0729 20:10:02.579463       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 20:10:02.675994       1 shared_informer.go:320] Caches are synced for service config
	I0729 20:10:02.676221       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 20:10:02.680431       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be] <==
	E0729 20:09:45.292279       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 20:09:45.309301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 20:09:45.309396       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 20:09:45.371422       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 20:09:45.371525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 20:09:45.509150       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 20:09:45.509278       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 20:09:45.542301       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 20:09:45.542419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 20:09:45.551642       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 20:09:45.553621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 20:09:45.646656       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 20:09:45.647288       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 20:09:45.665246       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 20:09:45.665351       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0729 20:09:48.152261       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 20:12:53.607480       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="33523618-d46c-4dc1-9aa3-c3f217c7903f" pod="default/busybox-fc5497c4f-xn8rr" assumedNode="ha-344518-m02" currentNode="ha-344518-m03"
	E0729 20:12:53.637136       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-xn8rr\": pod busybox-fc5497c4f-xn8rr is already assigned to node \"ha-344518-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-xn8rr" node="ha-344518-m03"
	E0729 20:12:53.637339       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 33523618-d46c-4dc1-9aa3-c3f217c7903f(default/busybox-fc5497c4f-xn8rr) was assumed on ha-344518-m03 but assigned to ha-344518-m02" pod="default/busybox-fc5497c4f-xn8rr"
	E0729 20:12:53.638078       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-xn8rr\": pod busybox-fc5497c4f-xn8rr is already assigned to node \"ha-344518-m02\"" pod="default/busybox-fc5497c4f-xn8rr"
	I0729 20:12:53.641620       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-xn8rr" node="ha-344518-m02"
	E0729 20:12:53.671517       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-22rcc\": pod busybox-fc5497c4f-22rcc is already assigned to node \"ha-344518-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-22rcc" node="ha-344518-m03"
	E0729 20:12:53.671567       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 89bd9cd8-932d-4941-bd9f-ecf2f6f90c07(default/busybox-fc5497c4f-22rcc) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-22rcc"
	E0729 20:12:53.671623       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-22rcc\": pod busybox-fc5497c4f-22rcc is already assigned to node \"ha-344518-m03\"" pod="default/busybox-fc5497c4f-22rcc"
	I0729 20:12:53.671656       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-22rcc" node="ha-344518-m03"
	
	
	==> kubelet <==
	Jul 29 20:12:53 ha-344518 kubelet[1384]: I0729 20:12:53.664371    1384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=172.664276647 podStartE2EDuration="2m52.664276647s" podCreationTimestamp="2024-07-29 20:10:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 20:10:18.872756869 +0000 UTC m=+31.378717754" watchObservedRunningTime="2024-07-29 20:12:53.664276647 +0000 UTC m=+186.170237534"
	Jul 29 20:12:53 ha-344518 kubelet[1384]: I0729 20:12:53.665327    1384 topology_manager.go:215] "Topology Admit Handler" podUID="34dba935-70e7-453a-996e-56c88c2e27ab" podNamespace="default" podName="busybox-fc5497c4f-fp24v"
	Jul 29 20:12:53 ha-344518 kubelet[1384]: I0729 20:12:53.667873    1384 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v678\" (UniqueName: \"kubernetes.io/projected/34dba935-70e7-453a-996e-56c88c2e27ab-kube-api-access-2v678\") pod \"busybox-fc5497c4f-fp24v\" (UID: \"34dba935-70e7-453a-996e-56c88c2e27ab\") " pod="default/busybox-fc5497c4f-fp24v"
	Jul 29 20:12:53 ha-344518 kubelet[1384]: W0729 20:12:53.676080    1384 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-344518" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-344518' and this object
	Jul 29 20:12:53 ha-344518 kubelet[1384]: E0729 20:12:53.676252    1384 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ha-344518" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'ha-344518' and this object
	Jul 29 20:13:47 ha-344518 kubelet[1384]: E0729 20:13:47.709538    1384 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:13:47 ha-344518 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:13:47 ha-344518 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:13:47 ha-344518 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:13:47 ha-344518 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:14:47 ha-344518 kubelet[1384]: E0729 20:14:47.709167    1384 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:14:47 ha-344518 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:14:47 ha-344518 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:14:47 ha-344518 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:14:47 ha-344518 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:15:47 ha-344518 kubelet[1384]: E0729 20:15:47.709788    1384 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:15:47 ha-344518 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:15:47 ha-344518 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:15:47 ha-344518 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:15:47 ha-344518 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:16:47 ha-344518 kubelet[1384]: E0729 20:16:47.709571    1384 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:16:47 ha-344518 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:16:47 ha-344518 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:16:47 ha-344518 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:16:47 ha-344518 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-344518 -n ha-344518
helpers_test.go:261: (dbg) Run:  kubectl --context ha-344518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (60.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-344518 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-344518 -v=7 --alsologtostderr
E0729 20:18:14.089916  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:18:41.776127  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-344518 -v=7 --alsologtostderr: exit status 82 (2m1.764072473s)

                                                
                                                
-- stdout --
	* Stopping node "ha-344518-m04"  ...
	* Stopping node "ha-344518-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:17:37.557844  761569 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:17:37.558130  761569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:17:37.558141  761569 out.go:304] Setting ErrFile to fd 2...
	I0729 20:17:37.558145  761569 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:17:37.558353  761569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:17:37.558620  761569 out.go:298] Setting JSON to false
	I0729 20:17:37.558728  761569 mustload.go:65] Loading cluster: ha-344518
	I0729 20:17:37.559101  761569 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:17:37.559232  761569 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:17:37.559465  761569 mustload.go:65] Loading cluster: ha-344518
	I0729 20:17:37.559655  761569 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:17:37.559693  761569 stop.go:39] StopHost: ha-344518-m04
	I0729 20:17:37.560119  761569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:37.560171  761569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:37.575789  761569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46681
	I0729 20:17:37.576299  761569 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:37.576844  761569 main.go:141] libmachine: Using API Version  1
	I0729 20:17:37.576867  761569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:37.577231  761569 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:37.579799  761569 out.go:177] * Stopping node "ha-344518-m04"  ...
	I0729 20:17:37.580934  761569 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 20:17:37.580977  761569 main.go:141] libmachine: (ha-344518-m04) Calling .DriverName
	I0729 20:17:37.581196  761569 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 20:17:37.581219  761569 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHHostname
	I0729 20:17:37.583969  761569 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:37.584377  761569 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:13:16 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:17:37.584409  761569 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:17:37.584555  761569 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHPort
	I0729 20:17:37.584738  761569 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHKeyPath
	I0729 20:17:37.584894  761569 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHUsername
	I0729 20:17:37.585046  761569 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m04/id_rsa Username:docker}
	I0729 20:17:37.666235  761569 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 20:17:37.719286  761569 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 20:17:37.773143  761569 main.go:141] libmachine: Stopping "ha-344518-m04"...
	I0729 20:17:37.773181  761569 main.go:141] libmachine: (ha-344518-m04) Calling .GetState
	I0729 20:17:37.774813  761569 main.go:141] libmachine: (ha-344518-m04) Calling .Stop
	I0729 20:17:37.778676  761569 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 0/120
	I0729 20:17:38.867096  761569 main.go:141] libmachine: (ha-344518-m04) Calling .GetState
	I0729 20:17:38.868471  761569 main.go:141] libmachine: Machine "ha-344518-m04" was stopped.
	I0729 20:17:38.868495  761569 stop.go:75] duration metric: took 1.287566474s to stop
	I0729 20:17:38.868517  761569 stop.go:39] StopHost: ha-344518-m03
	I0729 20:17:38.868836  761569 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:17:38.868887  761569 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:17:38.884626  761569 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35029
	I0729 20:17:38.885153  761569 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:17:38.885753  761569 main.go:141] libmachine: Using API Version  1
	I0729 20:17:38.885778  761569 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:17:38.886117  761569 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:17:38.888472  761569 out.go:177] * Stopping node "ha-344518-m03"  ...
	I0729 20:17:38.889609  761569 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 20:17:38.889637  761569 main.go:141] libmachine: (ha-344518-m03) Calling .DriverName
	I0729 20:17:38.889858  761569 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 20:17:38.889889  761569 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHHostname
	I0729 20:17:38.893277  761569 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:38.893704  761569 main.go:141] libmachine: (ha-344518-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:36:90:07", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:11:53 +0000 UTC Type:0 Mac:52:54:00:36:90:07 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:ha-344518-m03 Clientid:01:52:54:00:36:90:07}
	I0729 20:17:38.893737  761569 main.go:141] libmachine: (ha-344518-m03) DBG | domain ha-344518-m03 has defined IP address 192.168.39.53 and MAC address 52:54:00:36:90:07 in network mk-ha-344518
	I0729 20:17:38.893877  761569 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHPort
	I0729 20:17:38.894054  761569 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHKeyPath
	I0729 20:17:38.894207  761569 main.go:141] libmachine: (ha-344518-m03) Calling .GetSSHUsername
	I0729 20:17:38.894362  761569 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m03/id_rsa Username:docker}
	I0729 20:17:38.978867  761569 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 20:17:39.030954  761569 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 20:17:39.069508  761569 main.go:141] libmachine: Stopping "ha-344518-m03"...
	I0729 20:17:39.069535  761569 main.go:141] libmachine: (ha-344518-m03) Calling .GetState
	I0729 20:17:39.071237  761569 main.go:141] libmachine: (ha-344518-m03) Calling .Stop
	I0729 20:17:39.074862  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 0/120
	I0729 20:17:40.076663  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 1/120
	I0729 20:17:41.078083  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 2/120
	I0729 20:17:42.079250  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 3/120
	I0729 20:17:43.081004  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 4/120
	I0729 20:17:44.083209  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 5/120
	I0729 20:17:45.084765  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 6/120
	I0729 20:17:46.086734  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 7/120
	I0729 20:17:47.088165  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 8/120
	I0729 20:17:48.089720  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 9/120
	I0729 20:17:49.091875  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 10/120
	I0729 20:17:50.093242  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 11/120
	I0729 20:17:51.094869  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 12/120
	I0729 20:17:52.096636  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 13/120
	I0729 20:17:53.098178  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 14/120
	I0729 20:17:54.100402  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 15/120
	I0729 20:17:55.102852  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 16/120
	I0729 20:17:56.104318  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 17/120
	I0729 20:17:57.106092  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 18/120
	I0729 20:17:58.107568  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 19/120
	I0729 20:17:59.109262  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 20/120
	I0729 20:18:00.110657  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 21/120
	I0729 20:18:01.112212  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 22/120
	I0729 20:18:02.113616  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 23/120
	I0729 20:18:03.115274  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 24/120
	I0729 20:18:04.117377  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 25/120
	I0729 20:18:05.119229  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 26/120
	I0729 20:18:06.120733  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 27/120
	I0729 20:18:07.122371  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 28/120
	I0729 20:18:08.123851  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 29/120
	I0729 20:18:09.125854  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 30/120
	I0729 20:18:10.127238  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 31/120
	I0729 20:18:11.128971  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 32/120
	I0729 20:18:12.130361  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 33/120
	I0729 20:18:13.131706  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 34/120
	I0729 20:18:14.133753  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 35/120
	I0729 20:18:15.134982  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 36/120
	I0729 20:18:16.136518  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 37/120
	I0729 20:18:17.137844  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 38/120
	I0729 20:18:18.139413  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 39/120
	I0729 20:18:19.141580  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 40/120
	I0729 20:18:20.143068  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 41/120
	I0729 20:18:21.144707  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 42/120
	I0729 20:18:22.146612  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 43/120
	I0729 20:18:23.148133  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 44/120
	I0729 20:18:24.150086  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 45/120
	I0729 20:18:25.151382  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 46/120
	I0729 20:18:26.153299  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 47/120
	I0729 20:18:27.154751  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 48/120
	I0729 20:18:28.156812  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 49/120
	I0729 20:18:29.158744  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 50/120
	I0729 20:18:30.160364  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 51/120
	I0729 20:18:31.162742  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 52/120
	I0729 20:18:32.164180  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 53/120
	I0729 20:18:33.165733  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 54/120
	I0729 20:18:34.167979  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 55/120
	I0729 20:18:35.169591  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 56/120
	I0729 20:18:36.170843  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 57/120
	I0729 20:18:37.172333  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 58/120
	I0729 20:18:38.173605  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 59/120
	I0729 20:18:39.175955  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 60/120
	I0729 20:18:40.177677  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 61/120
	I0729 20:18:41.179208  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 62/120
	I0729 20:18:42.180860  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 63/120
	I0729 20:18:43.182387  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 64/120
	I0729 20:18:44.184087  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 65/120
	I0729 20:18:45.185379  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 66/120
	I0729 20:18:46.186776  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 67/120
	I0729 20:18:47.188386  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 68/120
	I0729 20:18:48.189952  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 69/120
	I0729 20:18:49.191875  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 70/120
	I0729 20:18:50.193423  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 71/120
	I0729 20:18:51.194854  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 72/120
	I0729 20:18:52.196504  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 73/120
	I0729 20:18:53.197921  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 74/120
	I0729 20:18:54.199637  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 75/120
	I0729 20:18:55.201028  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 76/120
	I0729 20:18:56.202486  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 77/120
	I0729 20:18:57.203926  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 78/120
	I0729 20:18:58.205475  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 79/120
	I0729 20:18:59.207297  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 80/120
	I0729 20:19:00.208770  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 81/120
	I0729 20:19:01.210481  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 82/120
	I0729 20:19:02.211990  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 83/120
	I0729 20:19:03.213349  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 84/120
	I0729 20:19:04.214816  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 85/120
	I0729 20:19:05.216168  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 86/120
	I0729 20:19:06.218535  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 87/120
	I0729 20:19:07.220006  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 88/120
	I0729 20:19:08.221418  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 89/120
	I0729 20:19:09.223602  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 90/120
	I0729 20:19:10.224898  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 91/120
	I0729 20:19:11.226355  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 92/120
	I0729 20:19:12.228472  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 93/120
	I0729 20:19:13.229914  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 94/120
	I0729 20:19:14.232482  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 95/120
	I0729 20:19:15.233958  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 96/120
	I0729 20:19:16.235381  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 97/120
	I0729 20:19:17.236813  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 98/120
	I0729 20:19:18.238259  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 99/120
	I0729 20:19:19.240139  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 100/120
	I0729 20:19:20.241594  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 101/120
	I0729 20:19:21.243007  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 102/120
	I0729 20:19:22.244623  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 103/120
	I0729 20:19:23.246508  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 104/120
	I0729 20:19:24.248124  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 105/120
	I0729 20:19:25.249510  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 106/120
	I0729 20:19:26.250821  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 107/120
	I0729 20:19:27.252350  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 108/120
	I0729 20:19:28.253929  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 109/120
	I0729 20:19:29.255963  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 110/120
	I0729 20:19:30.257371  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 111/120
	I0729 20:19:31.258887  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 112/120
	I0729 20:19:32.260397  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 113/120
	I0729 20:19:33.261980  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 114/120
	I0729 20:19:34.264609  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 115/120
	I0729 20:19:35.266145  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 116/120
	I0729 20:19:36.267520  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 117/120
	I0729 20:19:37.268935  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 118/120
	I0729 20:19:38.270321  761569 main.go:141] libmachine: (ha-344518-m03) Waiting for machine to stop 119/120
	I0729 20:19:39.271057  761569 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 20:19:39.271153  761569 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 20:19:39.273114  761569 out.go:177] 
	W0729 20:19:39.274577  761569 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 20:19:39.274597  761569 out.go:239] * 
	* 
	W0729 20:19:39.277669  761569 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 20:19:39.278831  761569 out.go:177] 

                                                
                                                
** /stderr **
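The stderr block above traces minikube's per-node stop flow as logged: back up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup over SSH, ask the kvm2 driver to stop the VM, then poll the machine state roughly once per second for up to 120 attempts before giving up with GUEST_STOP_TIMEOUT. The Go sketch below only approximates that polling pattern for illustration; the Machine interface, fakeVM type, stopWithBudget helper, error text, and timings are assumptions made for this example, not minikube's actual libmachine API.

// Illustrative sketch only: a poll-with-budget stop loop resembling the
// "Waiting for machine to stop N/120" progression in the log above.
// The Machine interface, states, and timings here are assumptions,
// not minikube's real driver API.
package main

import (
	"errors"
	"fmt"
	"time"
)

type State string

const (
	Running State = "Running"
	Stopped State = "Stopped"
)

// Machine is a hypothetical stand-in for a libmachine-style driver.
type Machine interface {
	Stop() error              // request a guest shutdown
	GetState() (State, error) // report the current VM state
	Name() string
}

var errStopTimeout = errors.New(`unable to stop vm, current state "Running"`)

// stopWithBudget asks the machine to stop, then polls its state up to
// `attempts` times, sleeping `interval` between checks, mirroring the
// 0/120 ... 119/120 counter seen in the log.
func stopWithBudget(m Machine, attempts int, interval time.Duration) error {
	if err := m.Stop(); err != nil {
		return fmt.Errorf("stop %s: %w", m.Name(), err)
	}
	for i := 0; i < attempts; i++ {
		st, err := m.GetState()
		if err != nil {
			return fmt.Errorf("get state of %s: %w", m.Name(), err)
		}
		if st == Stopped {
			return nil
		}
		fmt.Printf("(%s) Waiting for machine to stop %d/%d\n", m.Name(), i, attempts)
		time.Sleep(interval)
	}
	// Budget exhausted while the VM is still Running: the analogue of the
	// GUEST_STOP_TIMEOUT exit above.
	return errStopTimeout
}

// fakeVM simulates a VM that never reaches Stopped, like ha-344518-m03 above.
type fakeVM struct{ name string }

func (f *fakeVM) Stop() error              { return nil }
func (f *fakeVM) GetState() (State, error) { return Running, nil }
func (f *fakeVM) Name() string             { return f.name }

func main() {
	// Tiny budget so the example finishes quickly.
	if err := stopWithBudget(&fakeVM{name: "ha-344518-m03"}, 3, 10*time.Millisecond); err != nil {
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
	}
}

Run as-is this prints three waiting lines for the fake VM and then the timeout error; a driver-backed Machine with a 120-attempt budget would reproduce the progression and failure recorded above.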
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-344518 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-344518 --wait=true -v=7 --alsologtostderr
E0729 20:23:14.090663  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-344518 --wait=true -v=7 --alsologtostderr: (4m2.340053465s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-344518
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-344518 -n ha-344518
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-344518 logs -n 25: (1.742652402s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-344518 cp ha-344518-m03:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m02:/home/docker/cp-test_ha-344518-m03_ha-344518-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m02 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m03_ha-344518-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m03:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04:/home/docker/cp-test_ha-344518-m03_ha-344518-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m04 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m03_ha-344518-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-344518 cp testdata/cp-test.txt                                                | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1656315222/001/cp-test_ha-344518-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518:/home/docker/cp-test_ha-344518-m04_ha-344518.txt                       |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518 sudo cat                                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m04_ha-344518.txt                                 |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m02:/home/docker/cp-test_ha-344518-m04_ha-344518-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m02 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m04_ha-344518-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03:/home/docker/cp-test_ha-344518-m04_ha-344518-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m03 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m04_ha-344518-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-344518 node stop m02 -v=7                                                     | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-344518 node start m02 -v=7                                                    | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-344518 -v=7                                                           | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-344518 -v=7                                                                | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-344518 --wait=true -v=7                                                    | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:19 UTC | 29 Jul 24 20:23 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-344518                                                                | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:23 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 20:19:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 20:19:39.326027  762482 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:19:39.326148  762482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:19:39.326157  762482 out.go:304] Setting ErrFile to fd 2...
	I0729 20:19:39.326161  762482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:19:39.326347  762482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:19:39.326905  762482 out.go:298] Setting JSON to false
	I0729 20:19:39.327899  762482 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":14526,"bootTime":1722269853,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 20:19:39.327965  762482 start.go:139] virtualization: kvm guest
	I0729 20:19:39.330992  762482 out.go:177] * [ha-344518] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 20:19:39.332536  762482 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 20:19:39.332574  762482 notify.go:220] Checking for updates...
	I0729 20:19:39.335386  762482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 20:19:39.336598  762482 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:19:39.337778  762482 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:19:39.339087  762482 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 20:19:39.340542  762482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 20:19:39.342366  762482 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:19:39.342512  762482 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 20:19:39.343163  762482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:19:39.343267  762482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:19:39.358835  762482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0729 20:19:39.359304  762482 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:19:39.359949  762482 main.go:141] libmachine: Using API Version  1
	I0729 20:19:39.359972  762482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:19:39.360420  762482 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:19:39.360643  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:19:39.396513  762482 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 20:19:39.397700  762482 start.go:297] selected driver: kvm2
	I0729 20:19:39.397713  762482 start.go:901] validating driver "kvm2" against &{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:19:39.397853  762482 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 20:19:39.398178  762482 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:19:39.398249  762482 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 20:19:39.414151  762482 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 20:19:39.414862  762482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 20:19:39.414893  762482 cni.go:84] Creating CNI manager for ""
	I0729 20:19:39.414899  762482 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 20:19:39.414974  762482 start.go:340] cluster config:
	{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:19:39.415107  762482 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:19:39.418022  762482 out.go:177] * Starting "ha-344518" primary control-plane node in "ha-344518" cluster
	I0729 20:19:39.419422  762482 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:19:39.419468  762482 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 20:19:39.419480  762482 cache.go:56] Caching tarball of preloaded images
	I0729 20:19:39.419613  762482 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 20:19:39.419627  762482 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 20:19:39.419768  762482 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:19:39.419988  762482 start.go:360] acquireMachinesLock for ha-344518: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:19:39.420055  762482 start.go:364] duration metric: took 43.383µs to acquireMachinesLock for "ha-344518"
	I0729 20:19:39.420077  762482 start.go:96] Skipping create...Using existing machine configuration
	I0729 20:19:39.420085  762482 fix.go:54] fixHost starting: 
	I0729 20:19:39.420366  762482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:19:39.420403  762482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:19:39.436580  762482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0729 20:19:39.437057  762482 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:19:39.437566  762482 main.go:141] libmachine: Using API Version  1
	I0729 20:19:39.437610  762482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:19:39.437965  762482 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:19:39.438204  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:19:39.438357  762482 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:19:39.440159  762482 fix.go:112] recreateIfNeeded on ha-344518: state=Running err=<nil>
	W0729 20:19:39.440191  762482 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 20:19:39.442107  762482 out.go:177] * Updating the running kvm2 "ha-344518" VM ...
	I0729 20:19:39.443560  762482 machine.go:94] provisionDockerMachine start ...
	I0729 20:19:39.443586  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:19:39.443815  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:19:39.447224  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.447838  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:19:39.447873  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.448120  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:19:39.448347  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:39.448519  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:39.448661  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:19:39.448833  762482 main.go:141] libmachine: Using SSH client type: native
	I0729 20:19:39.449040  762482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:19:39.449053  762482 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 20:19:39.556910  762482 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344518
	
	I0729 20:19:39.556954  762482 main.go:141] libmachine: (ha-344518) Calling .GetMachineName
	I0729 20:19:39.557281  762482 buildroot.go:166] provisioning hostname "ha-344518"
	I0729 20:19:39.557315  762482 main.go:141] libmachine: (ha-344518) Calling .GetMachineName
	I0729 20:19:39.557528  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:19:39.560296  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.560652  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:19:39.560680  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.560865  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:19:39.561075  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:39.561215  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:39.561340  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:19:39.561498  762482 main.go:141] libmachine: Using SSH client type: native
	I0729 20:19:39.561675  762482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:19:39.561686  762482 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344518 && echo "ha-344518" | sudo tee /etc/hostname
	I0729 20:19:39.682863  762482 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344518
	
	I0729 20:19:39.682899  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:19:39.685791  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.686224  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:19:39.686252  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.686435  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:19:39.686630  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:39.686863  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:39.687132  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:19:39.687359  762482 main.go:141] libmachine: Using SSH client type: native
	I0729 20:19:39.687585  762482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:19:39.687602  762482 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344518/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 20:19:39.792936  762482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:19:39.792970  762482 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 20:19:39.793010  762482 buildroot.go:174] setting up certificates
	I0729 20:19:39.793020  762482 provision.go:84] configureAuth start
	I0729 20:19:39.793030  762482 main.go:141] libmachine: (ha-344518) Calling .GetMachineName
	I0729 20:19:39.793319  762482 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:19:39.796203  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.796591  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:19:39.796628  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.796739  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:19:39.799195  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.799707  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:19:39.799733  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.799884  762482 provision.go:143] copyHostCerts
	I0729 20:19:39.799941  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:19:39.799991  762482 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem, removing ...
	I0729 20:19:39.800008  762482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:19:39.800204  762482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 20:19:39.800337  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:19:39.800371  762482 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem, removing ...
	I0729 20:19:39.800382  762482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:19:39.800425  762482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 20:19:39.800485  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:19:39.800509  762482 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem, removing ...
	I0729 20:19:39.800517  762482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:19:39.800551  762482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 20:19:39.800616  762482 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.ha-344518 san=[127.0.0.1 192.168.39.238 ha-344518 localhost minikube]
	I0729 20:19:39.998916  762482 provision.go:177] copyRemoteCerts
	I0729 20:19:39.999008  762482 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 20:19:39.999046  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:19:40.002019  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:40.002486  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:19:40.002516  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:40.002762  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:19:40.003013  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:40.003162  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:19:40.003293  762482 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:19:40.086393  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 20:19:40.086462  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 20:19:40.111405  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 20:19:40.111509  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 20:19:40.134834  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 20:19:40.134924  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 20:19:40.158251  762482 provision.go:87] duration metric: took 365.212503ms to configureAuth
	I0729 20:19:40.158286  762482 buildroot.go:189] setting minikube options for container-runtime
	I0729 20:19:40.158528  762482 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:19:40.158613  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:19:40.160989  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:40.161368  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:19:40.161395  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:40.161653  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:19:40.161891  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:40.162084  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:40.162220  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:19:40.162427  762482 main.go:141] libmachine: Using SSH client type: native
	I0729 20:19:40.162592  762482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:19:40.162605  762482 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 20:21:11.038553  762482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 20:21:11.038583  762482 machine.go:97] duration metric: took 1m31.595004592s to provisionDockerMachine
	I0729 20:21:11.038596  762482 start.go:293] postStartSetup for "ha-344518" (driver="kvm2")
	I0729 20:21:11.038609  762482 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 20:21:11.038652  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:21:11.039094  762482 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 20:21:11.039126  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:21:11.042368  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.042798  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:21:11.042821  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.043073  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:21:11.043281  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:21:11.043448  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:21:11.043569  762482 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:21:11.126877  762482 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 20:21:11.130842  762482 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 20:21:11.130865  762482 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 20:21:11.130933  762482 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 20:21:11.131031  762482 filesync.go:149] local asset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> 7409622.pem in /etc/ssl/certs
	I0729 20:21:11.131051  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /etc/ssl/certs/7409622.pem
	I0729 20:21:11.131154  762482 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 20:21:11.140934  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:21:11.164513  762482 start.go:296] duration metric: took 125.901681ms for postStartSetup
	I0729 20:21:11.164567  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:21:11.164866  762482 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 20:21:11.164898  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:21:11.167772  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.168227  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:21:11.168252  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.168407  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:21:11.168678  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:21:11.168852  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:21:11.169002  762482 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	W0729 20:21:11.250070  762482 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 20:21:11.250106  762482 fix.go:56] duration metric: took 1m31.830020604s for fixHost
	I0729 20:21:11.250135  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:21:11.253222  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.253670  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:21:11.253699  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.253863  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:21:11.254082  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:21:11.254243  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:21:11.254409  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:21:11.254596  762482 main.go:141] libmachine: Using SSH client type: native
	I0729 20:21:11.254795  762482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:21:11.254809  762482 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 20:21:11.356735  762482 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722284471.314847115
	
	I0729 20:21:11.356758  762482 fix.go:216] guest clock: 1722284471.314847115
	I0729 20:21:11.356768  762482 fix.go:229] Guest: 2024-07-29 20:21:11.314847115 +0000 UTC Remote: 2024-07-29 20:21:11.250115186 +0000 UTC m=+91.960846804 (delta=64.731929ms)
	I0729 20:21:11.356820  762482 fix.go:200] guest clock delta is within tolerance: 64.731929ms
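	The garbled `date +%!s(MISSING).%!N(MISSING)` above is a logging artifact of the format verbs; the command actually run on the guest is `date +%s.%N`, and the reported delta is simply the difference between that guest reading and the host clock (1722284471.314847115 − 1722284471.250115186 ≈ 0.0647 s). A hedged sketch of repeating the check by hand, assuming the guest from this run is still reachable:

	  # Read the guest and host clocks and print the skew; key path and IP come from the log.
	  guest=$(ssh -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa \
	      docker@192.168.39.238 'date +%s.%N')
	  host=$(date +%s.%N)
	  awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest-host skew: %.3fs\n", g - h }'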
	I0729 20:21:11.356830  762482 start.go:83] releasing machines lock for "ha-344518", held for 1m31.936761283s
	I0729 20:21:11.356861  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:21:11.357169  762482 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:21:11.359989  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.360441  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:21:11.360471  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.360656  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:21:11.361209  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:21:11.361397  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:21:11.361498  762482 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 20:21:11.361547  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:21:11.361675  762482 ssh_runner.go:195] Run: cat /version.json
	I0729 20:21:11.361702  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:21:11.364232  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.364323  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.364683  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:21:11.364709  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.364736  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:21:11.364764  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.364888  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:21:11.365017  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:21:11.365084  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:21:11.365158  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:21:11.365229  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:21:11.365304  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:21:11.365393  762482 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:21:11.365449  762482 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:21:11.470373  762482 ssh_runner.go:195] Run: systemctl --version
	I0729 20:21:11.476161  762482 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 20:21:11.635605  762482 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 20:21:11.642978  762482 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 20:21:11.643057  762482 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 20:21:11.652341  762482 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 20:21:11.652363  762482 start.go:495] detecting cgroup driver to use...
	I0729 20:21:11.652444  762482 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 20:21:11.669843  762482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 20:21:11.684206  762482 docker.go:216] disabling cri-docker service (if available) ...
	I0729 20:21:11.684321  762482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 20:21:11.697442  762482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 20:21:11.710557  762482 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 20:21:11.852585  762482 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 20:21:12.006900  762482 docker.go:232] disabling docker service ...
	I0729 20:21:12.006976  762482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 20:21:12.023767  762482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 20:21:12.036385  762482 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 20:21:12.182468  762482 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 20:21:12.331276  762482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 20:21:12.344361  762482 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 20:21:12.361855  762482 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 20:21:12.361938  762482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:21:12.371774  762482 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 20:21:12.371838  762482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:21:12.381257  762482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:21:12.390988  762482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:21:12.400810  762482 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 20:21:12.411065  762482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:21:12.421009  762482 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:21:12.431791  762482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:21:12.441246  762482 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 20:21:12.450184  762482 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 20:21:12.459022  762482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:21:12.592451  762482 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 20:21:17.974217  762482 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.381434589s)
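	Taken together, the sed edits above leave four relevant settings in the CRI-O drop-in: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A hedged sketch for reading them back on the guest once the restart has completed:

	  # Run on the guest; the expected values are the ones inserted by the sed commands in this log.
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	  #   pause_image = "registry.k8s.io/pause:3.9"
	  #   cgroup_manager = "cgroupfs"
	  #   conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",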
	I0729 20:21:17.974317  762482 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 20:21:17.974413  762482 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 20:21:17.980046  762482 start.go:563] Will wait 60s for crictl version
	I0729 20:21:17.980110  762482 ssh_runner.go:195] Run: which crictl
	I0729 20:21:17.983800  762482 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 20:21:18.024115  762482 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 20:21:18.024213  762482 ssh_runner.go:195] Run: crio --version
	I0729 20:21:18.052211  762482 ssh_runner.go:195] Run: crio --version
	I0729 20:21:18.085384  762482 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 20:21:18.086779  762482 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:21:18.089632  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:18.090044  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:21:18.090073  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:18.090330  762482 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 20:21:18.095050  762482 kubeadm.go:883] updating cluster {Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 20:21:18.095205  762482 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:21:18.095246  762482 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:21:18.139778  762482 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 20:21:18.139811  762482 crio.go:433] Images already preloaded, skipping extraction
	I0729 20:21:18.139866  762482 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:21:18.171773  762482 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 20:21:18.171813  762482 cache_images.go:84] Images are preloaded, skipping loading
	I0729 20:21:18.171827  762482 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.30.3 crio true true} ...
	I0729 20:21:18.171974  762482 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
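	The [Unit]/[Service] fragment above is the kubelet drop-in that later lands on the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 309-byte scp further down). A minimal sketch, assuming shell access to the guest, for viewing the merged unit systemd actually runs:

	  # Prints kubelet.service plus every drop-in, including the ExecStart override above.
	  sudo systemctl cat kubelet
	  # the --hostname-override/--node-ip flags should match this log: ha-344518 / 192.168.39.238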
	I0729 20:21:18.172076  762482 ssh_runner.go:195] Run: crio config
	I0729 20:21:18.219780  762482 cni.go:84] Creating CNI manager for ""
	I0729 20:21:18.219805  762482 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 20:21:18.219821  762482 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 20:21:18.219851  762482 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-344518 NodeName:ha-344518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 20:21:18.220015  762482 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-344518"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 20:21:18.220057  762482 kube-vip.go:115] generating kube-vip config ...
	I0729 20:21:18.220119  762482 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 20:21:18.230986  762482 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
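	The modprobe above loads the IPVS modules that kube-vip's control-plane load balancing (enabled via lb_enable in the manifest below) relies on. A hedged sketch for confirming they are present on the guest:

	  # Module names are the ones requested by the modprobe command in this log.
	  lsmod | grep -E '^ip_vs|^nf_conntrack'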
	I0729 20:21:18.231115  762482 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
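	The manifest above pins the virtual IP 192.168.39.254 (the APIServerHAVIP from the cluster config) on port 8443, with leader election across the control-plane nodes. A hedged sketch of a quick reachability probe from the libvirt host once kube-vip is running; any HTTP response, even a 401/403 error body, shows the VIP is being served:

	  # -k because the API server certificate is signed by the cluster CA, not a public one.
	  curl -k --max-time 5 https://192.168.39.254:8443/version
	  # a JSON reply (or an auth error) means the VIP and the API server behind it are answering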
	I0729 20:21:18.231178  762482 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 20:21:18.240550  762482 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 20:21:18.240617  762482 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 20:21:18.249593  762482 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 20:21:18.265334  762482 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 20:21:18.280224  762482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 20:21:18.295551  762482 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
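	The four in-memory payloads just copied are the artifacts rendered earlier in this log: the kubelet drop-in and unit, the kubeadm config, and the kube-vip static pod. A minimal sketch, assuming guest access, for spot-checking that the byte counts on disk match what was sent:

	  # Sizes should line up with the scp lines above (309, 352, 2153 and 1441 bytes).
	  sudo stat -c '%s %n' \
	      /etc/systemd/system/kubelet.service.d/10-kubeadm.conf \
	      /lib/systemd/system/kubelet.service \
	      /var/tmp/minikube/kubeadm.yaml.new \
	      /etc/kubernetes/manifests/kube-vip.yaml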
	I0729 20:21:18.311581  762482 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 20:21:18.315365  762482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:21:18.458156  762482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:21:18.483712  762482 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518 for IP: 192.168.39.238
	I0729 20:21:18.483742  762482 certs.go:194] generating shared ca certs ...
	I0729 20:21:18.483775  762482 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:21:18.483997  762482 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 20:21:18.484094  762482 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 20:21:18.484113  762482 certs.go:256] generating profile certs ...
	I0729 20:21:18.484246  762482 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key
	I0729 20:21:18.484279  762482 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.93cf0b68
	I0729 20:21:18.484296  762482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.93cf0b68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.104 192.168.39.53 192.168.39.254]
	I0729 20:21:18.619358  762482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.93cf0b68 ...
	I0729 20:21:18.619398  762482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.93cf0b68: {Name:mkd34a221960939dcd8a99abb5e8f25076f38c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:21:18.619593  762482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.93cf0b68 ...
	I0729 20:21:18.619606  762482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.93cf0b68: {Name:mk8039e4c36f36c5da11f7adf9b8bbc5fb38ef2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:21:18.619682  762482 certs.go:381] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.93cf0b68 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt
	I0729 20:21:18.619842  762482 certs.go:385] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.93cf0b68 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key
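	The freshly minted apiserver certificate has to carry every address a client might dial: the service IP, localhost, all three control-plane node IPs and the kube-vip VIP listed at 20:21:18.484. A hedged sketch for confirming those SANs made it into the certificate, using the same profile path as the log:

	  openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'
	  # expect IP Address entries for 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.238,
	  # 192.168.39.104, 192.168.39.53 and 192.168.39.254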
	I0729 20:21:18.619985  762482 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key
	I0729 20:21:18.620002  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 20:21:18.620015  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 20:21:18.620027  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 20:21:18.620066  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 20:21:18.620085  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 20:21:18.620110  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 20:21:18.620131  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 20:21:18.620149  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 20:21:18.620216  762482 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem (1338 bytes)
	W0729 20:21:18.620251  762482 certs.go:480] ignoring /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962_empty.pem, impossibly tiny 0 bytes
	I0729 20:21:18.620261  762482 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 20:21:18.620283  762482 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 20:21:18.620311  762482 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 20:21:18.620335  762482 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 20:21:18.620374  762482 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:21:18.620402  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /usr/share/ca-certificates/7409622.pem
	I0729 20:21:18.620416  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:21:18.620428  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem -> /usr/share/ca-certificates/740962.pem
	I0729 20:21:18.621049  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 20:21:18.645035  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 20:21:18.667132  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 20:21:18.692169  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 20:21:18.714701  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 20:21:18.740573  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 20:21:18.765567  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 20:21:18.790733  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 20:21:18.816161  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /usr/share/ca-certificates/7409622.pem (1708 bytes)
	I0729 20:21:18.840834  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 20:21:18.865904  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem --> /usr/share/ca-certificates/740962.pem (1338 bytes)
	I0729 20:21:18.891428  762482 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 20:21:18.909205  762482 ssh_runner.go:195] Run: openssl version
	I0729 20:21:18.915521  762482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7409622.pem && ln -fs /usr/share/ca-certificates/7409622.pem /etc/ssl/certs/7409622.pem"
	I0729 20:21:18.926126  762482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7409622.pem
	I0729 20:21:18.930619  762482 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 20:21:18.930674  762482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7409622.pem
	I0729 20:21:18.936355  762482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7409622.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 20:21:18.945676  762482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 20:21:18.955672  762482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:21:18.959827  762482 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:21:18.959936  762482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:21:18.965137  762482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 20:21:18.973911  762482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/740962.pem && ln -fs /usr/share/ca-certificates/740962.pem /etc/ssl/certs/740962.pem"
	I0729 20:21:18.983957  762482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/740962.pem
	I0729 20:21:18.988251  762482 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 20:21:18.988310  762482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/740962.pem
	I0729 20:21:18.993596  762482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/740962.pem /etc/ssl/certs/51391683.0"
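	Each `openssl x509 -hash` call above prints the certificate's subject hash, and that hash is exactly the name used for the /etc/ssl/certs/<hash>.0 symlink created right after it (b5213941.0 for the minikube CA in this run). A small sketch of the same round trip, run on the guest:

	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  ls -l "/etc/ssl/certs/${h}.0"   # should point at the minikubeCA.pem link; h is b5213941 here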
	I0729 20:21:19.002786  762482 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:21:19.007180  762482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 20:21:19.012528  762482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 20:21:19.017944  762482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 20:21:19.023055  762482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 20:21:19.028572  762482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 20:21:19.033785  762482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
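	The `-checkend 86400` runs above ask whether each control-plane certificate expires within the next 86400 seconds (24 hours); a zero exit status means the certificate is still valid past that window, which lets minikube decide whether anything needs to be refreshed before starting the cluster. For example, on the guest:

	  sudo openssl x509 -noout -checkend 86400 \
	      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	      && echo "still valid for at least 24h" \
	      || echo "expires within 24h"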
	I0729 20:21:19.039103  762482 kubeadm.go:392] StartCluster: {Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:21:19.039274  762482 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 20:21:19.039330  762482 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 20:21:19.076562  762482 cri.go:89] found id: "2be21b3762e2b8f6207f4c6b63f22b53b15d2459ce4818a52d71a0219a66b4aa"
	I0729 20:21:19.076588  762482 cri.go:89] found id: "3c06c4829c7e53e9437b7427b8b47e0ba76a5f614452c9d673ed69fedae6922b"
	I0729 20:21:19.076592  762482 cri.go:89] found id: "ff31897b9a6449fdc1cf23b389b94e26797efbc68df8d8104de119eb5c9dd498"
	I0729 20:21:19.076595  762482 cri.go:89] found id: "7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c"
	I0729 20:21:19.076598  762482 cri.go:89] found id: "150057459b6854002f094be091609a708f47a33e024e971dd0a52ee45059feea"
	I0729 20:21:19.076601  762482 cri.go:89] found id: "4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a"
	I0729 20:21:19.076603  762482 cri.go:89] found id: "594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f"
	I0729 20:21:19.076606  762482 cri.go:89] found id: "d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454"
	I0729 20:21:19.076608  762482 cri.go:89] found id: "a5bf9f11f403485bba11bb296707954ef1f3951cd0686f3c2aef04ec544f6dfb"
	I0729 20:21:19.076615  762482 cri.go:89] found id: "1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be"
	I0729 20:21:19.076622  762482 cri.go:89] found id: "d1cab255995a78a5644e30400e94f037504f1f6a162cac7023d3b2074899a0e7"
	I0729 20:21:19.076626  762482 cri.go:89] found id: "3e957bb1c15cb6b1d0159a0941f43678dfa08f25dc582d6dd58a8d0b4f5f5c00"
	I0729 20:21:19.076630  762482 cri.go:89] found id: "a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50"
	I0729 20:21:19.076636  762482 cri.go:89] found id: ""
	I0729 20:21:19.076689  762482 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.298868103Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722284622298841292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f733bbad-1a32-44a6-bb85-34eeae432478 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.299452394Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=53fa0345-a779-4b22-94d0-d0a699f46045 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.299508441Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=53fa0345-a779-4b22-94d0-d0a699f46045 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.299965926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2991c901db3e1b2a53efc55a0d386d4041030802fb3328bd23a4aa5102c7cd3,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722284575664724942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6269dfd02a3c7cfdd496f304797388313ebc111d929c02148f9a281a4f6ef890,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722284529667731964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18f890b02c28d05903c6394651080defb961397049bd490d97c8f2e0a2f49f1,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722284523690878522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898a9f8b1999b5ef70ef31e3d508518770493bfcf282db7bf1a3f279f73aa889,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722284523671176009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42f63c809b9609669840eaf7839a4f8ec6df83b06781be68768c1d3b6bd5ecea,PodSandboxId:e2ddd8e9a60986cf2dc29be143b6bcb581574621244e65cbfa8a976a1b8bf857,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722284518947381848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82a77f6b5639a109c085d53999ee012c50f9a9f038a9310a3ee01a61c73e937,PodSandboxId:71dacd5a14cd22bfc4ae3c928dbae9aab210b6fd9afc190f92ab3100aa1a4a9c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722284500722364305,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006bc482e26170b5fd3d9110ea9ae2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad9db5d08667b5f9b315895e9ad4805f194b40b7553a1d700418f8916ff52c,PodSandboxId:676f5bb2dc61a1b314272543ab31f592744af8c73016a17c0c0068aecacbb23d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722284485769533868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:80e938336fd3e7fbc1694fd82610e1664e54063f54a808bdec44e98b5ddfe3ec,PodSandboxId:b7e0a5882dba451ee7766f690b54d720326bbd23f2721d9bb96f998112cbc402,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722284485543956290,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78626ab
b7e0bcf685d9788471a742f668b151da6faa3f321707ac63f8f1bbe6,PodSandboxId:052dda1765b54783be9a3b5fd109f8c1c982f804666524462cb495164a2f9edc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284485608060841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5882d9060c0d61bb94eab44893a0cb387e9034ecac4f2a1228531ab368fc8746,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722284485467809885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89a9f7056c1b235c762c9454796c713900a9d4ec9575b84e0e54a9dfbf600e3,PodSandboxId:447f30df39f92edeb32c865396a059a31e9948fdce84f9faced2368d3a8a9343,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722284485477507106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973ffc8ba50420879151776d566b23f1cb59e6893c2cc15be0144e5c2d193a7c,PodSandboxId:bca03cf071a72d5a04b8c088258f9d90d0fab2624dd814d39b8ec4bf6b99c1e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722284485361801530,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab86b8020816100d094c1b48821b0f1df477e1f9e093030148b4ce3ffbe90d8,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722284485405022048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8042b04ce3ea942c684bd08b20f0c8e3b640b7a9be711fb638462d00df1694c0,PodSandboxId:6fbba4fa017b1801e8194ba184ec2b3ef3dbdc0e1af8aa12f5e6c7782e840c02,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284479277598584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722283977503515029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annot
ations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722283817764610242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722283817701895522,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722283806075712230,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722283802307894993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722283781396492612,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722283781452418652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=53fa0345-a779-4b22-94d0-d0a699f46045 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.346990608Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3baa6081-a729-4648-b34d-1b4b875d8a8c name=/runtime.v1.RuntimeService/Version
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.347110702Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3baa6081-a729-4648-b34d-1b4b875d8a8c name=/runtime.v1.RuntimeService/Version
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.348793101Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74f31ec4-d3be-4e2e-9c1e-d3799d8acfa9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.349549967Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722284622349510775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74f31ec4-d3be-4e2e-9c1e-d3799d8acfa9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.350135706Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d608f5ea-f15b-4fa8-9890-3c098947c3c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.350242652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d608f5ea-f15b-4fa8-9890-3c098947c3c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.350933703Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2991c901db3e1b2a53efc55a0d386d4041030802fb3328bd23a4aa5102c7cd3,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722284575664724942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6269dfd02a3c7cfdd496f304797388313ebc111d929c02148f9a281a4f6ef890,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722284529667731964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18f890b02c28d05903c6394651080defb961397049bd490d97c8f2e0a2f49f1,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722284523690878522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898a9f8b1999b5ef70ef31e3d508518770493bfcf282db7bf1a3f279f73aa889,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722284523671176009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42f63c809b9609669840eaf7839a4f8ec6df83b06781be68768c1d3b6bd5ecea,PodSandboxId:e2ddd8e9a60986cf2dc29be143b6bcb581574621244e65cbfa8a976a1b8bf857,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722284518947381848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82a77f6b5639a109c085d53999ee012c50f9a9f038a9310a3ee01a61c73e937,PodSandboxId:71dacd5a14cd22bfc4ae3c928dbae9aab210b6fd9afc190f92ab3100aa1a4a9c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722284500722364305,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006bc482e26170b5fd3d9110ea9ae2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad9db5d08667b5f9b315895e9ad4805f194b40b7553a1d700418f8916ff52c,PodSandboxId:676f5bb2dc61a1b314272543ab31f592744af8c73016a17c0c0068aecacbb23d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722284485769533868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:80e938336fd3e7fbc1694fd82610e1664e54063f54a808bdec44e98b5ddfe3ec,PodSandboxId:b7e0a5882dba451ee7766f690b54d720326bbd23f2721d9bb96f998112cbc402,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722284485543956290,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78626ab
b7e0bcf685d9788471a742f668b151da6faa3f321707ac63f8f1bbe6,PodSandboxId:052dda1765b54783be9a3b5fd109f8c1c982f804666524462cb495164a2f9edc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284485608060841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5882d9060c0d61bb94eab44893a0cb387e9034ecac4f2a1228531ab368fc8746,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722284485467809885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89a9f7056c1b235c762c9454796c713900a9d4ec9575b84e0e54a9dfbf600e3,PodSandboxId:447f30df39f92edeb32c865396a059a31e9948fdce84f9faced2368d3a8a9343,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722284485477507106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973ffc8ba50420879151776d566b23f1cb59e6893c2cc15be0144e5c2d193a7c,PodSandboxId:bca03cf071a72d5a04b8c088258f9d90d0fab2624dd814d39b8ec4bf6b99c1e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722284485361801530,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab86b8020816100d094c1b48821b0f1df477e1f9e093030148b4ce3ffbe90d8,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722284485405022048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8042b04ce3ea942c684bd08b20f0c8e3b640b7a9be711fb638462d00df1694c0,PodSandboxId:6fbba4fa017b1801e8194ba184ec2b3ef3dbdc0e1af8aa12f5e6c7782e840c02,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284479277598584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722283977503515029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annot
ations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722283817764610242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722283817701895522,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722283806075712230,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722283802307894993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722283781396492612,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722283781452418652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d608f5ea-f15b-4fa8-9890-3c098947c3c8 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.398391291Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9737306f-51ca-4b88-8d83-fe6f136959db name=/runtime.v1.RuntimeService/Version
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.398463603Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9737306f-51ca-4b88-8d83-fe6f136959db name=/runtime.v1.RuntimeService/Version
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.399764207Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5a667c58-d42b-4f43-86ec-12a9e42c0028 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.400350007Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722284622400310112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a667c58-d42b-4f43-86ec-12a9e42c0028 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.401074136Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6a029724-636a-4e8b-bd6b-e774d0f3f3aa name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.401156375Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6a029724-636a-4e8b-bd6b-e774d0f3f3aa name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.401756001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2991c901db3e1b2a53efc55a0d386d4041030802fb3328bd23a4aa5102c7cd3,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722284575664724942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6269dfd02a3c7cfdd496f304797388313ebc111d929c02148f9a281a4f6ef890,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722284529667731964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18f890b02c28d05903c6394651080defb961397049bd490d97c8f2e0a2f49f1,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722284523690878522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898a9f8b1999b5ef70ef31e3d508518770493bfcf282db7bf1a3f279f73aa889,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722284523671176009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42f63c809b9609669840eaf7839a4f8ec6df83b06781be68768c1d3b6bd5ecea,PodSandboxId:e2ddd8e9a60986cf2dc29be143b6bcb581574621244e65cbfa8a976a1b8bf857,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722284518947381848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82a77f6b5639a109c085d53999ee012c50f9a9f038a9310a3ee01a61c73e937,PodSandboxId:71dacd5a14cd22bfc4ae3c928dbae9aab210b6fd9afc190f92ab3100aa1a4a9c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722284500722364305,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006bc482e26170b5fd3d9110ea9ae2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad9db5d08667b5f9b315895e9ad4805f194b40b7553a1d700418f8916ff52c,PodSandboxId:676f5bb2dc61a1b314272543ab31f592744af8c73016a17c0c0068aecacbb23d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722284485769533868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:80e938336fd3e7fbc1694fd82610e1664e54063f54a808bdec44e98b5ddfe3ec,PodSandboxId:b7e0a5882dba451ee7766f690b54d720326bbd23f2721d9bb96f998112cbc402,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722284485543956290,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78626ab
b7e0bcf685d9788471a742f668b151da6faa3f321707ac63f8f1bbe6,PodSandboxId:052dda1765b54783be9a3b5fd109f8c1c982f804666524462cb495164a2f9edc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284485608060841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5882d9060c0d61bb94eab44893a0cb387e9034ecac4f2a1228531ab368fc8746,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722284485467809885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89a9f7056c1b235c762c9454796c713900a9d4ec9575b84e0e54a9dfbf600e3,PodSandboxId:447f30df39f92edeb32c865396a059a31e9948fdce84f9faced2368d3a8a9343,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722284485477507106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973ffc8ba50420879151776d566b23f1cb59e6893c2cc15be0144e5c2d193a7c,PodSandboxId:bca03cf071a72d5a04b8c088258f9d90d0fab2624dd814d39b8ec4bf6b99c1e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722284485361801530,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab86b8020816100d094c1b48821b0f1df477e1f9e093030148b4ce3ffbe90d8,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722284485405022048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8042b04ce3ea942c684bd08b20f0c8e3b640b7a9be711fb638462d00df1694c0,PodSandboxId:6fbba4fa017b1801e8194ba184ec2b3ef3dbdc0e1af8aa12f5e6c7782e840c02,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284479277598584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722283977503515029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annot
ations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722283817764610242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722283817701895522,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722283806075712230,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722283802307894993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722283781396492612,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722283781452418652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6a029724-636a-4e8b-bd6b-e774d0f3f3aa name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.441712117Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=676a9a05-d06e-4269-a0e0-8d8da4a3bfb8 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.441785168Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=676a9a05-d06e-4269-a0e0-8d8da4a3bfb8 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.442945705Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a9b0a18-469f-4d28-b7e6-1926589a67f4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.443497210Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722284622443473559,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a9b0a18-469f-4d28-b7e6-1926589a67f4 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.444036326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ccfd99c-c20f-4d30-9f34-b6a65cfdf2b1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.444104730Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ccfd99c-c20f-4d30-9f34-b6a65cfdf2b1 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:23:42 ha-344518 crio[3752]: time="2024-07-29 20:23:42.444528258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2991c901db3e1b2a53efc55a0d386d4041030802fb3328bd23a4aa5102c7cd3,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722284575664724942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6269dfd02a3c7cfdd496f304797388313ebc111d929c02148f9a281a4f6ef890,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722284529667731964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18f890b02c28d05903c6394651080defb961397049bd490d97c8f2e0a2f49f1,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722284523690878522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898a9f8b1999b5ef70ef31e3d508518770493bfcf282db7bf1a3f279f73aa889,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722284523671176009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42f63c809b9609669840eaf7839a4f8ec6df83b06781be68768c1d3b6bd5ecea,PodSandboxId:e2ddd8e9a60986cf2dc29be143b6bcb581574621244e65cbfa8a976a1b8bf857,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722284518947381848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82a77f6b5639a109c085d53999ee012c50f9a9f038a9310a3ee01a61c73e937,PodSandboxId:71dacd5a14cd22bfc4ae3c928dbae9aab210b6fd9afc190f92ab3100aa1a4a9c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722284500722364305,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006bc482e26170b5fd3d9110ea9ae2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad9db5d08667b5f9b315895e9ad4805f194b40b7553a1d700418f8916ff52c,PodSandboxId:676f5bb2dc61a1b314272543ab31f592744af8c73016a17c0c0068aecacbb23d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722284485769533868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:80e938336fd3e7fbc1694fd82610e1664e54063f54a808bdec44e98b5ddfe3ec,PodSandboxId:b7e0a5882dba451ee7766f690b54d720326bbd23f2721d9bb96f998112cbc402,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722284485543956290,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78626ab
b7e0bcf685d9788471a742f668b151da6faa3f321707ac63f8f1bbe6,PodSandboxId:052dda1765b54783be9a3b5fd109f8c1c982f804666524462cb495164a2f9edc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284485608060841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5882d9060c0d61bb94eab44893a0cb387e9034ecac4f2a1228531ab368fc8746,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722284485467809885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89a9f7056c1b235c762c9454796c713900a9d4ec9575b84e0e54a9dfbf600e3,PodSandboxId:447f30df39f92edeb32c865396a059a31e9948fdce84f9faced2368d3a8a9343,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722284485477507106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973ffc8ba50420879151776d566b23f1cb59e6893c2cc15be0144e5c2d193a7c,PodSandboxId:bca03cf071a72d5a04b8c088258f9d90d0fab2624dd814d39b8ec4bf6b99c1e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722284485361801530,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab86b8020816100d094c1b48821b0f1df477e1f9e093030148b4ce3ffbe90d8,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722284485405022048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8042b04ce3ea942c684bd08b20f0c8e3b640b7a9be711fb638462d00df1694c0,PodSandboxId:6fbba4fa017b1801e8194ba184ec2b3ef3dbdc0e1af8aa12f5e6c7782e840c02,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284479277598584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722283977503515029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annot
ations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722283817764610242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722283817701895522,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722283806075712230,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722283802307894993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722283781396492612,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722283781452418652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ccfd99c-c20f-4d30-9f34-b6a65cfdf2b1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	d2991c901db3e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      46 seconds ago       Running             storage-provisioner       4                   8577d7c915c6b       storage-provisioner
	6269dfd02a3c7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   2                   00a6fc7aabd3b       kube-controller-manager-ha-344518
	c18f890b02c28       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   8577d7c915c6b       storage-provisioner
	898a9f8b1999b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            3                   271d8a2d81427       kube-apiserver-ha-344518
	42f63c809b960       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   e2ddd8e9a6098       busybox-fc5497c4f-fp24v
	d82a77f6b5639       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      2 minutes ago        Running             kube-vip                  0                   71dacd5a14cd2       kube-vip-ha-344518
	5bad9db5d0866       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      2 minutes ago        Running             kube-proxy                1                   676f5bb2dc61a       kube-proxy-fh6rg
	f78626abb7e0b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   052dda1765b54       coredns-7db6d8ff4d-xpkp6
	80e938336fd3e       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      2 minutes ago        Running             kindnet-cni               1                   b7e0a5882dba4       kindnet-nl4kz
	c89a9f7056c1b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      2 minutes ago        Running             kube-scheduler            1                   447f30df39f92       kube-scheduler-ha-344518
	5882d9060c0d6       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      2 minutes ago        Exited              kube-apiserver            2                   271d8a2d81427       kube-apiserver-ha-344518
	cab86b8020816       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      2 minutes ago        Exited              kube-controller-manager   1                   00a6fc7aabd3b       kube-controller-manager-ha-344518
	973ffc8ba5042       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      2 minutes ago        Running             etcd                      1                   bca03cf071a72       etcd-ha-344518
	8042b04ce3ea9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      2 minutes ago        Running             coredns                   1                   6fbba4fa017b1       coredns-7db6d8ff4d-wzmc5
	962f37271e54d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   4fd5554044288       busybox-fc5497c4f-fp24v
	7bed7bb792810       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   e6598d2da30cd       coredns-7db6d8ff4d-xpkp6
	4d27dc2036f3c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago       Exited              coredns                   0                   ffb2234aef191       coredns-7db6d8ff4d-wzmc5
	594577e4d332f       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    13 minutes ago       Exited              kindnet-cni               0                   aa3121e476fc2       kindnet-nl4kz
	d79e4f49251f6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      13 minutes ago       Exited              kube-proxy                0                   08408a18bb915       kube-proxy-fh6rg
	1121b90510c21       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      14 minutes ago       Exited              kube-scheduler            0                   b61bed291d877       kube-scheduler-ha-344518
	a0e14d313861e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      14 minutes ago       Exited              etcd                      0                   259cc56efacfd       etcd-ha-344518
	
	
	==> coredns [4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a] <==
	[INFO] 10.244.2.2:35340 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003153904s
	[INFO] 10.244.2.2:54596 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140336s
	[INFO] 10.244.0.4:38854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001949954s
	[INFO] 10.244.0.4:39933 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113699s
	[INFO] 10.244.0.4:54725 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150049s
	[INFO] 10.244.1.2:46191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115875s
	[INFO] 10.244.1.2:54023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001742745s
	[INFO] 10.244.1.2:51538 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140285s
	[INFO] 10.244.1.2:56008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088578s
	[INFO] 10.244.2.2:44895 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095319s
	[INFO] 10.244.2.2:40784 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167082s
	[INFO] 10.244.0.4:48376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120067s
	[INFO] 10.244.0.4:39840 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111609s
	[INFO] 10.244.0.4:38416 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058031s
	[INFO] 10.244.1.2:42578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176608s
	[INFO] 10.244.2.2:48597 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139446s
	[INFO] 10.244.2.2:51477 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106731s
	[INFO] 10.244.0.4:47399 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109762s
	[INFO] 10.244.0.4:48496 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126806s
	[INFO] 10.244.1.2:33090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183559s
	[INFO] 10.244.1.2:58207 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095513s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1898&timeout=6m37s&timeoutSeconds=397&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1871&timeout=6m4s&timeoutSeconds=364&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c] <==
	[INFO] 10.244.2.2:40109 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117861s
	[INFO] 10.244.0.4:43889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020394s
	[INFO] 10.244.0.4:34685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072181s
	[INFO] 10.244.0.4:59825 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001335615s
	[INFO] 10.244.0.4:51461 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176686s
	[INFO] 10.244.0.4:35140 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051586s
	[INFO] 10.244.1.2:54871 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115274s
	[INFO] 10.244.1.2:51590 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001521426s
	[INFO] 10.244.1.2:60677 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011059s
	[INFO] 10.244.1.2:48005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106929s
	[INFO] 10.244.2.2:58992 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110446s
	[INFO] 10.244.2.2:41728 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108732s
	[INFO] 10.244.0.4:38164 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104442s
	[INFO] 10.244.1.2:47258 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118558s
	[INFO] 10.244.1.2:38089 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092315s
	[INFO] 10.244.1.2:33841 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075348s
	[INFO] 10.244.2.2:33549 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013334s
	[INFO] 10.244.2.2:53967 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203235s
	[INFO] 10.244.0.4:37211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128698s
	[INFO] 10.244.0.4:50842 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112886s
	[INFO] 10.244.1.2:51560 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000281444s
	[INFO] 10.244.1.2:48121 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000072064s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1931&timeout=5m55s&timeoutSeconds=355&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [8042b04ce3ea942c684bd08b20f0c8e3b640b7a9be711fb638462d00df1694c0] <==
	Trace[1378146420]: [10.001558209s] [10.001558209s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2023310062]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 20:21:28.851) (total time: 10001ms):
	Trace[2023310062]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:21:38.853)
	Trace[2023310062]: [10.001954743s] [10.001954743s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1494062392]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 20:21:31.980) (total time: 10001ms):
	Trace[1494062392]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:21:41.981)
	Trace[1494062392]: [10.001660843s] [10.001660843s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:52826->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:52826->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f78626abb7e0bcf685d9788471a742f668b151da6faa3f321707ac63f8f1bbe6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53628->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1287796767]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 20:21:37.560) (total time: 10200ms):
	Trace[1287796767]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53628->10.96.0.1:443: read: connection reset by peer 10200ms (20:21:47.760)
	Trace[1287796767]: [10.200304192s] [10.200304192s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53628->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53616->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2007934946]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 20:21:37.475) (total time: 10285ms):
	Trace[2007934946]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53616->10.96.0.1:443: read: connection reset by peer 10285ms (20:21:47.761)
	Trace[2007934946]: [10.285863549s] [10.285863549s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53616->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-344518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T20_09_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:09:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:23:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:22:08 +0000   Mon, 29 Jul 2024 20:09:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:22:08 +0000   Mon, 29 Jul 2024 20:09:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:22:08 +0000   Mon, 29 Jul 2024 20:09:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:22:08 +0000   Mon, 29 Jul 2024 20:10:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-344518
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58926cc84a1545f2aed136a3e761f2be
	  System UUID:                58926cc8-4a15-45f2-aed1-36a3e761f2be
	  Boot ID:                    53511801-74aa-43cb-9108-0a1fffab4f32
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fp24v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7db6d8ff4d-wzmc5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-xpkp6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-344518                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-nl4kz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-344518             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-344518    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-fh6rg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-344518             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-344518                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 94s                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-344518 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-344518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-344518 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-344518 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	  Warning  ContainerGCFailed        2m55s (x2 over 3m55s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           89s                    node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	  Normal   RegisteredNode           81s                    node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	  Normal   RegisteredNode           23s                    node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
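	
	The ContainerGCFailed warning above reports a missing /var/run/crio/crio.sock, consistent with the CRI-O restart visible later in this log. A minimal way to re-check the runtime socket on this node (assuming the profile and node names used in this run) would be:
	
	  minikube ssh -p ha-344518 -n ha-344518 -- sudo systemctl status crio
	  minikube ssh -p ha-344518 -n ha-344518 -- sudo crictl info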
	
	
	Name:               ha-344518-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T20_11_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:11:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:23:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:22:51 +0000   Mon, 29 Jul 2024 20:22:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:22:51 +0000   Mon, 29 Jul 2024 20:22:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:22:51 +0000   Mon, 29 Jul 2024 20:22:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:22:51 +0000   Mon, 29 Jul 2024 20:22:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-344518-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9e624f7b4f7644519a6f4690f28614c0
	  System UUID:                9e624f7b-4f76-4451-9a6f-4690f28614c0
	  Boot ID:                    4306b075-ea2d-4345-8b6c-8e5f4f92efe0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xn8rr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-344518-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-jj2b4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-344518-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-344518-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nfxp2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-344518-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-344518-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 92s                kube-proxy       
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           12m                node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node ha-344518-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node ha-344518-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node ha-344518-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  NodeNotReady             8m54s              node-controller  Node ha-344518-m02 status is now: NodeNotReady
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node ha-344518-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node ha-344518-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x7 over 2m)    kubelet          Node ha-344518-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           89s                node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  RegisteredNode           81s                node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  RegisteredNode           23s                node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
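	
	The events above show ha-344518-m02 going NodeNotReady and then the kubelet restarting roughly two minutes before this snapshot was taken. A hedged way to see why the kubelet restarted, assuming the same profile and node names, is to read its journal on that node:
	
	  minikube ssh -p ha-344518 -n ha-344518-m02 -- sudo journalctl -u kubelet --no-pager -n 100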
	
	
	Name:               ha-344518-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T20_12_28_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:12:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:23:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:23:20 +0000   Mon, 29 Jul 2024 20:12:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:23:20 +0000   Mon, 29 Jul 2024 20:12:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:23:20 +0000   Mon, 29 Jul 2024 20:12:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:23:20 +0000   Mon, 29 Jul 2024 20:12:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    ha-344518-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 41330caf582148fd80914bd6e0732453
	  System UUID:                41330caf-5821-48fd-8091-4bd6e0732453
	  Boot ID:                    98b50c62-6f4f-4be6-9056-1ecd291ef12d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-22rcc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-344518-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-6qbz5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-344518-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-344518-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-s8wn5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-344518-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-344518-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 35s                kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node ha-344518-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node ha-344518-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node ha-344518-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11m                node-controller  Node ha-344518-m03 event: Registered Node ha-344518-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-344518-m03 event: Registered Node ha-344518-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-344518-m03 event: Registered Node ha-344518-m03 in Controller
	  Normal   RegisteredNode           89s                node-controller  Node ha-344518-m03 event: Registered Node ha-344518-m03 in Controller
	  Normal   RegisteredNode           81s                node-controller  Node ha-344518-m03 event: Registered Node ha-344518-m03 in Controller
	  Normal   Starting                 52s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  52s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  52s                kubelet          Node ha-344518-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    52s                kubelet          Node ha-344518-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     52s                kubelet          Node ha-344518-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 52s                kubelet          Node ha-344518-m03 has been rebooted, boot id: 98b50c62-6f4f-4be6-9056-1ecd291ef12d
	  Normal   RegisteredNode           23s                node-controller  Node ha-344518-m03 event: Registered Node ha-344518-m03 in Controller
	
	
	Name:               ha-344518-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T20_13_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:13:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:23:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:23:34 +0000   Mon, 29 Jul 2024 20:23:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:23:34 +0000   Mon, 29 Jul 2024 20:23:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:23:34 +0000   Mon, 29 Jul 2024 20:23:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:23:34 +0000   Mon, 29 Jul 2024 20:23:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-344518-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8a26135ecab4ebcafa4c947c9d6f013
	  System UUID:                d8a26135-ecab-4ebc-afa4-c947c9d6f013
	  Boot ID:                    ee682693-6ce8-4022-a093-1d884cc6af51
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-4m6xw       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-947zc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   RegisteredNode           10m                node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node ha-344518-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node ha-344518-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node ha-344518-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal   NodeReady                9m53s              kubelet          Node ha-344518-m04 status is now: NodeReady
	  Normal   RegisteredNode           90s                node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal   RegisteredNode           82s                node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal   NodeNotReady             50s                node-controller  Node ha-344518-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           24s                node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9s (x2 over 9s)    kubelet          Node ha-344518-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x2 over 9s)    kubelet          Node ha-344518-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x2 over 9s)    kubelet          Node ha-344518-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9s                 kubelet          Node ha-344518-m04 has been rebooted, boot id: ee682693-6ce8-4022-a093-1d884cc6af51
	  Normal   NodeReady                9s                 kubelet          Node ha-344518-m04 status is now: NodeReady
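	
	The four node descriptions above are the per-node detail; for a compact cross-check of the same state, the standard commands would be:
	
	  kubectl get nodes -o wide
	  kubectl describe node ha-344518-m04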
	
	
	==> dmesg <==
	[  +9.281405] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.054666] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050707] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.158935] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.126079] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.245623] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.820743] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.869843] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.068841] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.242210] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.084855] kauditd_printk_skb: 79 callbacks suppressed
	[Jul29 20:10] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.358609] kauditd_printk_skb: 38 callbacks suppressed
	[Jul29 20:11] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 20:21] systemd-fstab-generator[3672]: Ignoring "noauto" option for root device
	[  +0.144300] systemd-fstab-generator[3684]: Ignoring "noauto" option for root device
	[  +0.182848] systemd-fstab-generator[3698]: Ignoring "noauto" option for root device
	[  +0.146719] systemd-fstab-generator[3710]: Ignoring "noauto" option for root device
	[  +0.269424] systemd-fstab-generator[3738]: Ignoring "noauto" option for root device
	[  +5.858543] systemd-fstab-generator[3841]: Ignoring "noauto" option for root device
	[  +0.084685] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.631358] kauditd_printk_skb: 22 callbacks suppressed
	[ +12.253877] kauditd_printk_skb: 75 callbacks suppressed
	[ +10.069821] kauditd_printk_skb: 1 callbacks suppressed
	[Jul29 20:22] kauditd_printk_skb: 10 callbacks suppressed
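	
	The kernel ring buffer above is mostly systemd-fstab-generator and kauditd noise around the 20:21 restart; if a fresher copy is needed, the same output can be pulled straight from the VM, e.g.:
	
	  minikube ssh -p ha-344518 -- sudo dmesg | tail -n 50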
	
	
	==> etcd [973ffc8ba50420879151776d566b23f1cb59e6893c2cc15be0144e5c2d193a7c] <==
	{"level":"warn","ts":"2024-07-29T20:22:55.593267Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.53:2380/version","remote-member-id":"57cb2df333d7b24","error":"Get \"https://192.168.39.53:2380/version\": dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T20:22:55.593429Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"57cb2df333d7b24","error":"Get \"https://192.168.39.53:2380/version\": dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T20:22:55.780602Z","caller":"traceutil/trace.go:171","msg":"trace[1923549962] transaction","detail":"{read_only:false; response_revision:2354; number_of_response:1; }","duration":"116.60728ms","start":"2024-07-29T20:22:55.663947Z","end":"2024-07-29T20:22:55.780554Z","steps":["trace[1923549962] 'process raft request'  (duration: 116.391826ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T20:22:56.108461Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"57cb2df333d7b24","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T20:22:56.109709Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"57cb2df333d7b24","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T20:22:59.596184Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.39.53:2380/version","remote-member-id":"57cb2df333d7b24","error":"Get \"https://192.168.39.53:2380/version\": dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T20:22:59.596283Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"57cb2df333d7b24","error":"Get \"https://192.168.39.53:2380/version\": dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T20:23:01.108723Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"57cb2df333d7b24","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T20:23:01.109835Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"57cb2df333d7b24","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T20:23:02.341Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.948346ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10056697397770037315 > lease_revoke:<id:0b90910027ada75b>","response":"size:28"}
	{"level":"warn","ts":"2024-07-29T20:23:02.341528Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.75425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-ha-344518-m03\" ","response":"range_response_count:1 size:6884"}
	{"level":"info","ts":"2024-07-29T20:23:02.341653Z","caller":"traceutil/trace.go:171","msg":"trace[1507753701] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-ha-344518-m03; range_end:; response_count:1; response_revision:2380; }","duration":"118.916334ms","start":"2024-07-29T20:23:02.222717Z","end":"2024-07-29T20:23:02.341633Z","steps":["trace[1507753701] 'agreement among raft nodes before linearized reading'  (duration: 17.22727ms)","trace[1507753701] 'range keys from in-memory index tree'  (duration: 101.497394ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T20:23:02.621155Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:23:02.628841Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:23:02.646571Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:23:02.648491Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fff3906243738b90","to":"57cb2df333d7b24","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-29T20:23:02.648668Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:23:02.648596Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fff3906243738b90","to":"57cb2df333d7b24","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T20:23:02.648866Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"warn","ts":"2024-07-29T20:23:02.665844Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.53:33634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-29T20:23:06.109759Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"57cb2df333d7b24","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T20:23:06.110693Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"57cb2df333d7b24","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T20:23:12.709689Z","caller":"traceutil/trace.go:171","msg":"trace[1507592919] transaction","detail":"{read_only:false; response_revision:2425; number_of_response:1; }","duration":"125.432558ms","start":"2024-07-29T20:23:12.584242Z","end":"2024-07-29T20:23:12.709674Z","steps":["trace[1507592919] 'process raft request'  (duration: 125.334901ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T20:23:37.516579Z","caller":"traceutil/trace.go:171","msg":"trace[364444095] transaction","detail":"{read_only:false; response_revision:2518; number_of_response:1; }","duration":"104.731185ms","start":"2024-07-29T20:23:37.411808Z","end":"2024-07-29T20:23:37.516539Z","steps":["trace[364444095] 'process raft request'  (duration: 104.594979ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T20:23:41.293401Z","caller":"traceutil/trace.go:171","msg":"trace[1439658240] transaction","detail":"{read_only:false; response_revision:2532; number_of_response:1; }","duration":"123.527902ms","start":"2024-07-29T20:23:41.169846Z","end":"2024-07-29T20:23:41.293374Z","steps":["trace[1439658240] 'process raft request'  (duration: 122.881491ms)"],"step_count":1}
	
	
	==> etcd [a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50] <==
	{"level":"warn","ts":"2024-07-29T20:19:40.299427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.779446342s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T20:19:40.328477Z","caller":"traceutil/trace.go:171","msg":"trace[1283689381] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; }","duration":"7.808532729s","start":"2024-07-29T20:19:32.519937Z","end":"2024-07-29T20:19:40.32847Z","steps":["trace[1283689381] 'agreement among raft nodes before linearized reading'  (duration: 7.779453255s)"],"step_count":1}
	2024/07/29 20:19:40 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-29T20:19:40.323908Z","caller":"traceutil/trace.go:171","msg":"trace[2141952898] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-bw7ug3v3srytgg3i4du4ulxmvu; range_end:; }","duration":"7.813960958s","start":"2024-07-29T20:19:32.509127Z","end":"2024-07-29T20:19:40.323088Z","steps":["trace[2141952898] 'agreement among raft nodes before linearized reading'  (duration: 7.791389958s)"],"step_count":1}
	2024/07/29 20:19:40 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T20:19:40.323186Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T20:19:31.448835Z","time spent":"8.874339609s","remote":"127.0.0.1:42658","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true "}
	2024/07/29 20:19:40 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-29T20:19:40.365497Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fff3906243738b90","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T20:19:40.365683Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"282a67d4a7229a3d"}
	{"level":"info","ts":"2024-07-29T20:19:40.365738Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"282a67d4a7229a3d"}
	{"level":"info","ts":"2024-07-29T20:19:40.365795Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"282a67d4a7229a3d"}
	{"level":"info","ts":"2024-07-29T20:19:40.365887Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d"}
	{"level":"info","ts":"2024-07-29T20:19:40.36597Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d"}
	{"level":"info","ts":"2024-07-29T20:19:40.366057Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d"}
	{"level":"info","ts":"2024-07-29T20:19:40.366098Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"282a67d4a7229a3d"}
	{"level":"info","ts":"2024-07-29T20:19:40.366108Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:19:40.366117Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:19:40.366154Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:19:40.366245Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:19:40.366293Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:19:40.366345Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:19:40.366378Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:19:40.369898Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2024-07-29T20:19:40.370094Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2024-07-29T20:19:40.370169Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-344518","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.238:2380"],"advertise-client-urls":["https://192.168.39.238:2379"]}
	
	
	==> kernel <==
	 20:23:43 up 14 min,  0 users,  load average: 0.30, 0.48, 0.28
	Linux ha-344518 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f] <==
	I0729 20:19:16.996547       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:19:16.996553       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:19:16.996678       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:19:16.996730       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:19:16.996807       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:19:16.996826       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	I0729 20:19:27.005161       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:19:27.005272       1 main.go:299] handling current node
	I0729 20:19:27.005297       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:19:27.005303       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:19:27.005432       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:19:27.005484       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:19:27.005565       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:19:27.005584       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	E0729 20:19:34.014870       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1876&timeout=7m9s&timeoutSeconds=429&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	I0729 20:19:36.996555       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:19:36.996595       1 main.go:299] handling current node
	I0729 20:19:36.996633       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:19:36.996643       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:19:36.996799       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:19:36.996805       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:19:36.996874       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:19:36.996880       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	W0729 20:19:37.086724       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1876": dial tcp 10.96.0.1:443: connect: no route to host
	E0729 20:19:37.087179       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1876": dial tcp 10.96.0.1:443: connect: no route to host
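	
	The dial errors to 10.96.0.1:443 above are against the in-cluster kubernetes Service ClusterIP, so they line up with the API server outage rather than a kindnet-specific problem. Once the control plane is back, a quick sanity check would be:
	
	  kubectl -n default get svc kubernetes
	  kubectl -n default get endpoints kubernetes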
	
	
	==> kindnet [80e938336fd3e7fbc1694fd82610e1664e54063f54a808bdec44e98b5ddfe3ec] <==
	I0729 20:23:06.624753       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	I0729 20:23:16.626265       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:23:16.626378       1 main.go:299] handling current node
	I0729 20:23:16.626407       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:23:16.626428       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:23:16.626578       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:23:16.626607       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:23:16.626682       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:23:16.626700       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	I0729 20:23:26.618062       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:23:26.618387       1 main.go:299] handling current node
	I0729 20:23:26.618416       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:23:26.618425       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:23:26.618689       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:23:26.618725       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:23:26.618836       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:23:26.618862       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	I0729 20:23:36.617289       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:23:36.617340       1 main.go:299] handling current node
	I0729 20:23:36.617355       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:23:36.617368       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:23:36.617578       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:23:36.617602       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:23:36.617660       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:23:36.617664       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [5882d9060c0d61bb94eab44893a0cb387e9034ecac4f2a1228531ab368fc8746] <==
	I0729 20:21:26.016306       1 options.go:221] external host was not specified, using 192.168.39.238
	I0729 20:21:26.017370       1 server.go:148] Version: v1.30.3
	I0729 20:21:26.017464       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:21:26.725787       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 20:21:26.743248       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 20:21:26.749867       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 20:21:26.749901       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 20:21:26.750076       1 instance.go:299] Using reconciler: lease
	W0729 20:21:46.725048       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 20:21:46.726523       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0729 20:21:46.753626       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
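	
	This first apiserver instance aborted because it could not complete the etcd handshake on 127.0.0.1:2379 before the storage-factory deadline. A hedged way to see what the local etcd container was doing at that point (the container id is a placeholder to be taken from the crictl ps output):
	
	  minikube ssh -p ha-344518 -- sudo crictl ps -a --name etcd
	  minikube ssh -p ha-344518 -- sudo crictl logs <etcd-container-id>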
	
	
	==> kube-apiserver [898a9f8b1999b5ef70ef31e3d508518770493bfcf282db7bf1a3f279f73aa889] <==
	I0729 20:22:05.637578       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 20:22:05.615133       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0729 20:22:05.714879       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 20:22:05.719625       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 20:22:05.726059       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 20:22:05.715982       1 shared_informer.go:320] Caches are synced for configmaps
	W0729 20:22:05.737567       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.53]
	I0729 20:22:05.737689       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 20:22:05.737758       1 aggregator.go:165] initial CRD sync complete...
	I0729 20:22:05.737769       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 20:22:05.737774       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 20:22:05.737778       1 cache.go:39] Caches are synced for autoregister controller
	I0729 20:22:05.737887       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 20:22:05.772637       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 20:22:05.779026       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 20:22:05.779140       1 policy_source.go:224] refreshing policies
	I0729 20:22:05.815704       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 20:22:05.815805       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 20:22:05.815847       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 20:22:05.841060       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 20:22:05.849150       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 20:22:05.855373       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 20:22:06.621994       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 20:22:06.971314       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.238 192.168.39.53]
	W0729 20:22:16.971771       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.238]
	
	
	==> kube-controller-manager [6269dfd02a3c7cfdd496f304797388313ebc111d929c02148f9a281a4f6ef890] <==
	I0729 20:22:21.054388       1 shared_informer.go:320] Caches are synced for expand
	I0729 20:22:21.063284       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 20:22:21.076783       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 20:22:21.076948       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.182µs"
	I0729 20:22:21.077097       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="118.221µs"
	I0729 20:22:21.094628       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 20:22:21.113564       1 shared_informer.go:320] Caches are synced for disruption
	I0729 20:22:21.119322       1 shared_informer.go:320] Caches are synced for crt configmap
	I0729 20:22:21.125376       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0729 20:22:21.221630       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 20:22:21.250881       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 20:22:21.271603       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 20:22:21.700819       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 20:22:21.721757       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 20:22:21.721841       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 20:22:27.543746       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-5ldlj EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-5ldlj\": the object has been modified; please apply your changes to the latest version and try again"
	I0729 20:22:27.544141       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"994fc84c-a038-4b43-8718-6e948dc0b8b6", APIVersion:"v1", ResourceVersion:"239", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-5ldlj EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-5ldlj": the object has been modified; please apply your changes to the latest version and try again
	I0729 20:22:27.608966       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="101.095159ms"
	I0729 20:22:27.623141       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.090919ms"
	I0729 20:22:27.623501       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.529µs"
	I0729 20:22:51.174666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.752927ms"
	I0729 20:22:51.176371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="73.238µs"
	I0729 20:23:10.648568       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.034749ms"
	I0729 20:23:10.648794       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.037µs"
	I0729 20:23:34.230078       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-344518-m04"
	
	
	==> kube-controller-manager [cab86b8020816100d094c1b48821b0f1df477e1f9e093030148b4ce3ffbe90d8] <==
	I0729 20:21:26.824746       1 serving.go:380] Generated self-signed cert in-memory
	I0729 20:21:27.272846       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 20:21:27.272885       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:21:27.274473       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 20:21:27.274592       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 20:21:27.274699       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 20:21:27.274823       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 20:21:47.760018       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.238:8443/healthz\": dial tcp 192.168.39.238:8443: connect: connection refused"
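	
	This controller-manager instance gave up waiting for the local apiserver health check on 192.168.39.238:8443, matching the apiserver failure above; the replacement instance (6269dfd0…) synced its caches once the apiserver came back at 20:22. After recovery, one equivalent check through kubectl would be:
	
	  kubectl get --raw=/healthz
	  kubectl get --raw='/readyz?verbose'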
	
	
	==> kube-proxy [5bad9db5d08667b5f9b315895e9ad4805f194b40b7553a1d700418f8916ff52c] <==
	I0729 20:21:26.954833       1 server_linux.go:69] "Using iptables proxy"
	E0729 20:21:28.318723       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344518\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 20:21:31.391348       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344518\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 20:21:34.463817       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344518\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 20:21:40.608449       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344518\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 20:21:49.822754       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344518\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 20:22:08.041705       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.238"]
	I0729 20:22:08.075776       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 20:22:08.075880       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 20:22:08.075921       1 server_linux.go:165] "Using iptables Proxier"
	I0729 20:22:08.078299       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 20:22:08.078648       1 server.go:872] "Version info" version="v1.30.3"
	I0729 20:22:08.078955       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:22:08.081633       1 config.go:192] "Starting service config controller"
	I0729 20:22:08.081690       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 20:22:08.081740       1 config.go:101] "Starting endpoint slice config controller"
	I0729 20:22:08.081760       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 20:22:08.083616       1 config.go:319] "Starting node config controller"
	I0729 20:22:08.083637       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 20:22:08.182163       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 20:22:08.182168       1 shared_informer.go:320] Caches are synced for service config
	I0729 20:22:08.183758       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454] <==
	E0729 20:18:30.143655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:33.215944       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:33.216090       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:33.216286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:33.216366       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:33.215829       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:33.216453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:39.359899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:39.359997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:39.360516       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:39.360595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:39.360634       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:39.360690       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:48.575761       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:48.575888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:48.575998       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:48.575915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:48.576117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:48.576233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:19:07.007147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:19:07.007282       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:19:07.007420       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:19:07.007475       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:19:13.151371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:19:13.151692       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be] <==
	E0729 20:19:36.004049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 20:19:36.005041       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 20:19:36.005064       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 20:19:36.168958       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 20:19:36.169057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 20:19:36.271753       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 20:19:36.271856       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 20:19:36.347442       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 20:19:36.347488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 20:19:36.571475       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 20:19:36.571571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 20:19:36.625585       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 20:19:36.625629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 20:19:36.932558       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 20:19:36.932594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 20:19:39.082794       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 20:19:39.082845       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 20:19:39.467510       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 20:19:39.467556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 20:19:39.901570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 20:19:39.901682       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 20:19:40.216643       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 20:19:40.216680       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0729 20:19:40.269585       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0729 20:19:40.269766       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c89a9f7056c1b235c762c9454796c713900a9d4ec9575b84e0e54a9dfbf600e3] <==
	W0729 20:21:57.973291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.238:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0729 20:21:57.973341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.238:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0729 20:21:58.359729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.238:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0729 20:21:58.359787       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.238:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0729 20:22:01.001829       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0729 20:22:01.002015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0729 20:22:05.667403       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 20:22:05.667455       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 20:22:05.680616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 20:22:05.680668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 20:22:05.680734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 20:22:05.680766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 20:22:05.680824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 20:22:05.680852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 20:22:05.680640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 20:22:05.680979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 20:22:05.704964       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 20:22:05.705105       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 20:22:05.708608       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 20:22:05.708729       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 20:22:05.715613       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 20:22:05.715743       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 20:22:05.716006       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 20:22:05.716047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0729 20:22:22.969722       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 20:22:04 ha-344518 kubelet[1384]: E0729 20:22:04.172434    1384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9e8bd9d2-8adf-47de-8e32-05d64002a631)\"" pod="kube-system/storage-provisioner" podUID="9e8bd9d2-8adf-47de-8e32-05d64002a631"
	Jul 29 20:22:05 ha-344518 kubelet[1384]: W0729 20:22:05.182689    1384 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=1876": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 29 20:22:05 ha-344518 kubelet[1384]: E0729 20:22:05.182991    1384 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-root-ca.crt&resourceVersion=1876": dial tcp 192.168.39.254:8443: connect: no route to host
	Jul 29 20:22:05 ha-344518 kubelet[1384]: E0729 20:22:05.182998    1384 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-344518\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344518?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 20:22:05 ha-344518 kubelet[1384]: E0729 20:22:05.183057    1384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-344518?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Jul 29 20:22:05 ha-344518 kubelet[1384]: I0729 20:22:05.183150    1384 status_manager.go:853] "Failed to get status for pod" podUID="9e8bd9d2-8adf-47de-8e32-05d64002a631" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 20:22:08 ha-344518 kubelet[1384]: E0729 20:22:08.254521    1384 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ha-344518\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344518?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 20:22:08 ha-344518 kubelet[1384]: I0729 20:22:08.254510    1384 status_manager.go:853] "Failed to get status for pod" podUID="cd59779c0bf07be17ee08a6f723c6a83" pod="kube-system/kube-controller-manager-ha-344518" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-344518\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Jul 29 20:22:09 ha-344518 kubelet[1384]: I0729 20:22:09.652790    1384 scope.go:117] "RemoveContainer" containerID="cab86b8020816100d094c1b48821b0f1df477e1f9e093030148b4ce3ffbe90d8"
	Jul 29 20:22:19 ha-344518 kubelet[1384]: I0729 20:22:19.651919    1384 scope.go:117] "RemoveContainer" containerID="c18f890b02c28d05903c6394651080defb961397049bd490d97c8f2e0a2f49f1"
	Jul 29 20:22:19 ha-344518 kubelet[1384]: E0729 20:22:19.652238    1384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9e8bd9d2-8adf-47de-8e32-05d64002a631)\"" pod="kube-system/storage-provisioner" podUID="9e8bd9d2-8adf-47de-8e32-05d64002a631"
	Jul 29 20:22:31 ha-344518 kubelet[1384]: I0729 20:22:31.922691    1384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-fp24v" podStartSLOduration=576.453442703 podStartE2EDuration="9m38.922644505s" podCreationTimestamp="2024-07-29 20:12:53 +0000 UTC" firstStartedPulling="2024-07-29 20:12:55.023331104 +0000 UTC m=+187.529291967" lastFinishedPulling="2024-07-29 20:12:57.492532901 +0000 UTC m=+189.998493769" observedRunningTime="2024-07-29 20:12:58.393608588 +0000 UTC m=+190.899569472" watchObservedRunningTime="2024-07-29 20:22:31.922644505 +0000 UTC m=+764.428605383"
	Jul 29 20:22:32 ha-344518 kubelet[1384]: I0729 20:22:32.652466    1384 scope.go:117] "RemoveContainer" containerID="c18f890b02c28d05903c6394651080defb961397049bd490d97c8f2e0a2f49f1"
	Jul 29 20:22:32 ha-344518 kubelet[1384]: E0729 20:22:32.653263    1384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9e8bd9d2-8adf-47de-8e32-05d64002a631)\"" pod="kube-system/storage-provisioner" podUID="9e8bd9d2-8adf-47de-8e32-05d64002a631"
	Jul 29 20:22:43 ha-344518 kubelet[1384]: I0729 20:22:43.652178    1384 scope.go:117] "RemoveContainer" containerID="c18f890b02c28d05903c6394651080defb961397049bd490d97c8f2e0a2f49f1"
	Jul 29 20:22:43 ha-344518 kubelet[1384]: E0729 20:22:43.652454    1384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9e8bd9d2-8adf-47de-8e32-05d64002a631)\"" pod="kube-system/storage-provisioner" podUID="9e8bd9d2-8adf-47de-8e32-05d64002a631"
	Jul 29 20:22:47 ha-344518 kubelet[1384]: E0729 20:22:47.711917    1384 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:22:47 ha-344518 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:22:47 ha-344518 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:22:47 ha-344518 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:22:47 ha-344518 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:22:54 ha-344518 kubelet[1384]: I0729 20:22:54.652101    1384 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-344518" podUID="140d2a2f-c461-421e-9b01-a5e6d7f2b9f8"
	Jul 29 20:22:54 ha-344518 kubelet[1384]: I0729 20:22:54.669675    1384 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-344518"
	Jul 29 20:22:55 ha-344518 kubelet[1384]: I0729 20:22:55.652376    1384 scope.go:117] "RemoveContainer" containerID="c18f890b02c28d05903c6394651080defb961397049bd490d97c8f2e0a2f49f1"
	Jul 29 20:22:56 ha-344518 kubelet[1384]: I0729 20:22:56.487611    1384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-344518" podStartSLOduration=2.487579776 podStartE2EDuration="2.487579776s" podCreationTimestamp="2024-07-29 20:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 20:22:56.472300686 +0000 UTC m=+788.978261571" watchObservedRunningTime="2024-07-29 20:22:56.487579776 +0000 UTC m=+788.993540660"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 20:23:41.996438  763871 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19344-733808/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-344518 -n ha-344518
helpers_test.go:261: (dbg) Run:  kubectl --context ha-344518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (366.59s)
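The "bufio.Scanner: token too long" error in the stderr block above is Go's bufio.Scanner hitting its default 64 KiB token limit on an overlong line in lastStart.txt. A minimal sketch of reading such a file with an enlarged scanner buffer (the path is copied from the log line above; the 10 MiB cap is an illustrative choice, not minikube's actual setting):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// Path taken from the failing log line above.
		f, err := os.Open("/home/jenkins/minikube-integration/19344-733808/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// The default max token size is 64 KiB; raising it avoids
		// "bufio.Scanner: token too long" on very long log lines.
		sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // 10 MiB cap (illustrative)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}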

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344518 stop -v=7 --alsologtostderr: exit status 82 (2m0.483607621s)

                                                
                                                
-- stdout --
	* Stopping node "ha-344518-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:24:01.613033  764281 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:24:01.613321  764281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:24:01.613331  764281 out.go:304] Setting ErrFile to fd 2...
	I0729 20:24:01.613336  764281 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:24:01.613549  764281 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:24:01.613788  764281 out.go:298] Setting JSON to false
	I0729 20:24:01.613877  764281 mustload.go:65] Loading cluster: ha-344518
	I0729 20:24:01.614335  764281 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:24:01.614469  764281 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:24:01.614684  764281 mustload.go:65] Loading cluster: ha-344518
	I0729 20:24:01.614815  764281 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:24:01.614852  764281 stop.go:39] StopHost: ha-344518-m04
	I0729 20:24:01.615237  764281 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:24:01.615295  764281 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:24:01.631486  764281 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37519
	I0729 20:24:01.632047  764281 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:24:01.632649  764281 main.go:141] libmachine: Using API Version  1
	I0729 20:24:01.632675  764281 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:24:01.633029  764281 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:24:01.635599  764281 out.go:177] * Stopping node "ha-344518-m04"  ...
	I0729 20:24:01.636825  764281 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0729 20:24:01.636855  764281 main.go:141] libmachine: (ha-344518-m04) Calling .DriverName
	I0729 20:24:01.637087  764281 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0729 20:24:01.637111  764281 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHHostname
	I0729 20:24:01.639835  764281 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:24:01.640371  764281 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:23:28 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:24:01.640401  764281 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:24:01.640538  764281 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHPort
	I0729 20:24:01.640746  764281 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHKeyPath
	I0729 20:24:01.640932  764281 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHUsername
	I0729 20:24:01.641107  764281 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m04/id_rsa Username:docker}
	I0729 20:24:01.725962  764281 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0729 20:24:01.778521  764281 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0729 20:24:01.830597  764281 main.go:141] libmachine: Stopping "ha-344518-m04"...
	I0729 20:24:01.830639  764281 main.go:141] libmachine: (ha-344518-m04) Calling .GetState
	I0729 20:24:01.832516  764281 main.go:141] libmachine: (ha-344518-m04) Calling .Stop
	I0729 20:24:01.836054  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 0/120
	I0729 20:24:02.837980  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 1/120
	I0729 20:24:03.839147  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 2/120
	I0729 20:24:04.840923  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 3/120
	I0729 20:24:05.842388  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 4/120
	I0729 20:24:06.844407  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 5/120
	I0729 20:24:07.846853  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 6/120
	I0729 20:24:08.848280  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 7/120
	I0729 20:24:09.850540  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 8/120
	I0729 20:24:10.852261  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 9/120
	I0729 20:24:11.854854  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 10/120
	I0729 20:24:12.856419  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 11/120
	I0729 20:24:13.858878  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 12/120
	I0729 20:24:14.860236  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 13/120
	I0729 20:24:15.862568  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 14/120
	I0729 20:24:16.864509  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 15/120
	I0729 20:24:17.865741  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 16/120
	I0729 20:24:18.867129  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 17/120
	I0729 20:24:19.868608  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 18/120
	I0729 20:24:20.870674  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 19/120
	I0729 20:24:21.872867  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 20/120
	I0729 20:24:22.874360  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 21/120
	I0729 20:24:23.875864  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 22/120
	I0729 20:24:24.877631  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 23/120
	I0729 20:24:25.879371  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 24/120
	I0729 20:24:26.881757  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 25/120
	I0729 20:24:27.883035  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 26/120
	I0729 20:24:28.884656  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 27/120
	I0729 20:24:29.886316  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 28/120
	I0729 20:24:30.888289  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 29/120
	I0729 20:24:31.890036  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 30/120
	I0729 20:24:32.891653  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 31/120
	I0729 20:24:33.892989  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 32/120
	I0729 20:24:34.894839  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 33/120
	I0729 20:24:35.896611  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 34/120
	I0729 20:24:36.898523  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 35/120
	I0729 20:24:37.901190  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 36/120
	I0729 20:24:38.902722  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 37/120
	I0729 20:24:39.904145  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 38/120
	I0729 20:24:40.905506  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 39/120
	I0729 20:24:41.907622  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 40/120
	I0729 20:24:42.909030  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 41/120
	I0729 20:24:43.910360  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 42/120
	I0729 20:24:44.911994  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 43/120
	I0729 20:24:45.913611  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 44/120
	I0729 20:24:46.915732  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 45/120
	I0729 20:24:47.917113  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 46/120
	I0729 20:24:48.918512  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 47/120
	I0729 20:24:49.919922  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 48/120
	I0729 20:24:50.921243  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 49/120
	I0729 20:24:51.923665  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 50/120
	I0729 20:24:52.925972  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 51/120
	I0729 20:24:53.927419  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 52/120
	I0729 20:24:54.928885  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 53/120
	I0729 20:24:55.930960  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 54/120
	I0729 20:24:56.932805  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 55/120
	I0729 20:24:57.934585  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 56/120
	I0729 20:24:58.935933  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 57/120
	I0729 20:24:59.937758  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 58/120
	I0729 20:25:00.939468  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 59/120
	I0729 20:25:01.941698  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 60/120
	I0729 20:25:02.943132  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 61/120
	I0729 20:25:03.944687  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 62/120
	I0729 20:25:04.946871  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 63/120
	I0729 20:25:05.949131  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 64/120
	I0729 20:25:06.951069  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 65/120
	I0729 20:25:07.952341  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 66/120
	I0729 20:25:08.954547  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 67/120
	I0729 20:25:09.956889  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 68/120
	I0729 20:25:10.958920  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 69/120
	I0729 20:25:11.960681  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 70/120
	I0729 20:25:12.962107  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 71/120
	I0729 20:25:13.963507  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 72/120
	I0729 20:25:14.964947  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 73/120
	I0729 20:25:15.966950  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 74/120
	I0729 20:25:16.969133  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 75/120
	I0729 20:25:17.970725  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 76/120
	I0729 20:25:18.972222  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 77/120
	I0729 20:25:19.973974  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 78/120
	I0729 20:25:20.975724  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 79/120
	I0729 20:25:21.977269  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 80/120
	I0729 20:25:22.978937  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 81/120
	I0729 20:25:23.980423  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 82/120
	I0729 20:25:24.982690  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 83/120
	I0729 20:25:25.984191  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 84/120
	I0729 20:25:26.986222  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 85/120
	I0729 20:25:27.987959  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 86/120
	I0729 20:25:28.989372  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 87/120
	I0729 20:25:29.990828  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 88/120
	I0729 20:25:30.992467  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 89/120
	I0729 20:25:31.994486  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 90/120
	I0729 20:25:32.996109  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 91/120
	I0729 20:25:33.997608  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 92/120
	I0729 20:25:34.998927  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 93/120
	I0729 20:25:36.000518  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 94/120
	I0729 20:25:37.002346  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 95/120
	I0729 20:25:38.003674  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 96/120
	I0729 20:25:39.005368  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 97/120
	I0729 20:25:40.007256  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 98/120
	I0729 20:25:41.008728  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 99/120
	I0729 20:25:42.010599  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 100/120
	I0729 20:25:43.011991  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 101/120
	I0729 20:25:44.013461  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 102/120
	I0729 20:25:45.014841  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 103/120
	I0729 20:25:46.016561  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 104/120
	I0729 20:25:47.018815  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 105/120
	I0729 20:25:48.020293  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 106/120
	I0729 20:25:49.022548  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 107/120
	I0729 20:25:50.024069  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 108/120
	I0729 20:25:51.025958  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 109/120
	I0729 20:25:52.028197  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 110/120
	I0729 20:25:53.029535  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 111/120
	I0729 20:25:54.031136  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 112/120
	I0729 20:25:55.032575  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 113/120
	I0729 20:25:56.034790  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 114/120
	I0729 20:25:57.036556  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 115/120
	I0729 20:25:58.038631  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 116/120
	I0729 20:25:59.040002  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 117/120
	I0729 20:26:00.041389  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 118/120
	I0729 20:26:01.043107  764281 main.go:141] libmachine: (ha-344518-m04) Waiting for machine to stop 119/120
	I0729 20:26:02.044363  764281 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0729 20:26:02.044426  764281 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0729 20:26:02.046278  764281 out.go:177] 
	W0729 20:26:02.047434  764281 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0729 20:26:02.047450  764281 out.go:239] * 
	* 
	W0729 20:26:02.050545  764281 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 20:26:02.051843  764281 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-344518 stop -v=7 --alsologtostderr": exit status 82
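The exit status 82 above comes from the stop path giving up: the "Waiting for machine to stop N/120" lines show libmachine polling the VM state roughly once per second for 120 attempts and still seeing "Running". A minimal sketch of that poll-then-fail pattern (the function name, interval, attempt count, and getState signature are illustrative assumptions, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"time"
	)

	// waitForStop polls getState until the machine reports "Stopped" or the
	// attempt budget runs out, mirroring the "Waiting for machine to stop N/120"
	// lines in the log above.
	func waitForStop(getState func() (string, error), attempts int, interval time.Duration) error {
		last := "Unknown"
		for i := 0; i < attempts; i++ {
			state, err := getState()
			if err != nil {
				return err
			}
			if state == "Stopped" {
				return nil
			}
			last = state
			time.Sleep(interval)
		}
		return fmt.Errorf("unable to stop vm, current state %q", last)
	}

	func main() {
		// Hypothetical stand-in for libmachine's GetState call; it always
		// reports "Running", so the wait times out just as in the log.
		getState := func() (string, error) { return "Running", nil }
		if err := waitForStop(getState, 3, 10*time.Millisecond); err != nil {
			fmt.Println("stop err:", err)
		}
	}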
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr: exit status 3 (18.851692002s)

                                                
                                                
-- stdout --
	ha-344518
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-344518-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:26:02.098403  764731 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:26:02.098709  764731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:26:02.098722  764731 out.go:304] Setting ErrFile to fd 2...
	I0729 20:26:02.098729  764731 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:26:02.098992  764731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:26:02.099198  764731 out.go:298] Setting JSON to false
	I0729 20:26:02.099224  764731 mustload.go:65] Loading cluster: ha-344518
	I0729 20:26:02.099289  764731 notify.go:220] Checking for updates...
	I0729 20:26:02.099611  764731 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:26:02.099627  764731 status.go:255] checking status of ha-344518 ...
	I0729 20:26:02.099989  764731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:26:02.100069  764731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:26:02.124216  764731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41621
	I0729 20:26:02.124787  764731 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:26:02.125373  764731 main.go:141] libmachine: Using API Version  1
	I0729 20:26:02.125389  764731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:26:02.125758  764731 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:26:02.125951  764731 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:26:02.127684  764731 status.go:330] ha-344518 host status = "Running" (err=<nil>)
	I0729 20:26:02.127700  764731 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:26:02.127982  764731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:26:02.128017  764731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:26:02.142705  764731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37425
	I0729 20:26:02.143130  764731 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:26:02.143580  764731 main.go:141] libmachine: Using API Version  1
	I0729 20:26:02.143601  764731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:26:02.143948  764731 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:26:02.144153  764731 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:26:02.147576  764731 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:26:02.148060  764731 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:26:02.148086  764731 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:26:02.148194  764731 host.go:66] Checking if "ha-344518" exists ...
	I0729 20:26:02.148494  764731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:26:02.148527  764731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:26:02.163308  764731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40859
	I0729 20:26:02.163788  764731 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:26:02.164257  764731 main.go:141] libmachine: Using API Version  1
	I0729 20:26:02.164276  764731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:26:02.164591  764731 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:26:02.164783  764731 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:26:02.165029  764731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:26:02.165056  764731 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:26:02.167299  764731 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:26:02.167726  764731 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:26:02.167752  764731 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:26:02.167857  764731 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:26:02.168021  764731 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:26:02.168186  764731 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:26:02.168321  764731 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:26:02.248983  764731 ssh_runner.go:195] Run: systemctl --version
	I0729 20:26:02.255980  764731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:26:02.271316  764731 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:26:02.271347  764731 api_server.go:166] Checking apiserver status ...
	I0729 20:26:02.271394  764731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:26:02.289463  764731 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/4987/cgroup
	W0729 20:26:02.298929  764731 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/4987/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:26:02.298976  764731 ssh_runner.go:195] Run: ls
	I0729 20:26:02.305957  764731 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:26:02.310730  764731 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:26:02.310755  764731 status.go:422] ha-344518 apiserver status = Running (err=<nil>)
	I0729 20:26:02.310765  764731 status.go:257] ha-344518 status: &{Name:ha-344518 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:26:02.310781  764731 status.go:255] checking status of ha-344518-m02 ...
	I0729 20:26:02.311090  764731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:26:02.311131  764731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:26:02.326300  764731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38229
	I0729 20:26:02.326840  764731 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:26:02.327374  764731 main.go:141] libmachine: Using API Version  1
	I0729 20:26:02.327398  764731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:26:02.327719  764731 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:26:02.327949  764731 main.go:141] libmachine: (ha-344518-m02) Calling .GetState
	I0729 20:26:02.329541  764731 status.go:330] ha-344518-m02 host status = "Running" (err=<nil>)
	I0729 20:26:02.329560  764731 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:26:02.329848  764731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:26:02.329889  764731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:26:02.345622  764731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40123
	I0729 20:26:02.346090  764731 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:26:02.346564  764731 main.go:141] libmachine: Using API Version  1
	I0729 20:26:02.346589  764731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:26:02.346915  764731 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:26:02.347121  764731 main.go:141] libmachine: (ha-344518-m02) Calling .GetIP
	I0729 20:26:02.350246  764731 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:26:02.350702  764731 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:21:29 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:26:02.350731  764731 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:26:02.350867  764731 host.go:66] Checking if "ha-344518-m02" exists ...
	I0729 20:26:02.351212  764731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:26:02.351278  764731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:26:02.366296  764731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45429
	I0729 20:26:02.366722  764731 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:26:02.367404  764731 main.go:141] libmachine: Using API Version  1
	I0729 20:26:02.367424  764731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:26:02.367782  764731 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:26:02.367985  764731 main.go:141] libmachine: (ha-344518-m02) Calling .DriverName
	I0729 20:26:02.368215  764731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:26:02.368240  764731 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHHostname
	I0729 20:26:02.371198  764731 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:26:02.371598  764731 main.go:141] libmachine: (ha-344518-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:a4:74", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:21:29 +0000 UTC Type:0 Mac:52:54:00:24:a4:74 Iaid: IPaddr:192.168.39.104 Prefix:24 Hostname:ha-344518-m02 Clientid:01:52:54:00:24:a4:74}
	I0729 20:26:02.371639  764731 main.go:141] libmachine: (ha-344518-m02) DBG | domain ha-344518-m02 has defined IP address 192.168.39.104 and MAC address 52:54:00:24:a4:74 in network mk-ha-344518
	I0729 20:26:02.371756  764731 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHPort
	I0729 20:26:02.371939  764731 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHKeyPath
	I0729 20:26:02.372126  764731 main.go:141] libmachine: (ha-344518-m02) Calling .GetSSHUsername
	I0729 20:26:02.372276  764731 sshutil.go:53] new ssh client: &{IP:192.168.39.104 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m02/id_rsa Username:docker}
	I0729 20:26:02.457207  764731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:26:02.475766  764731 kubeconfig.go:125] found "ha-344518" server: "https://192.168.39.254:8443"
	I0729 20:26:02.475798  764731 api_server.go:166] Checking apiserver status ...
	I0729 20:26:02.475833  764731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:26:02.494077  764731 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup
	W0729 20:26:02.504015  764731 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1528/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:26:02.504093  764731 ssh_runner.go:195] Run: ls
	I0729 20:26:02.508646  764731 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0729 20:26:02.512856  764731 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0729 20:26:02.512877  764731 status.go:422] ha-344518-m02 apiserver status = Running (err=<nil>)
	I0729 20:26:02.512887  764731 status.go:257] ha-344518-m02 status: &{Name:ha-344518-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:26:02.512902  764731 status.go:255] checking status of ha-344518-m04 ...
	I0729 20:26:02.513215  764731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:26:02.513252  764731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:26:02.528544  764731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41143
	I0729 20:26:02.529002  764731 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:26:02.529466  764731 main.go:141] libmachine: Using API Version  1
	I0729 20:26:02.529494  764731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:26:02.529834  764731 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:26:02.530049  764731 main.go:141] libmachine: (ha-344518-m04) Calling .GetState
	I0729 20:26:02.531715  764731 status.go:330] ha-344518-m04 host status = "Running" (err=<nil>)
	I0729 20:26:02.531730  764731 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:26:02.532047  764731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:26:02.532102  764731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:26:02.546935  764731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39677
	I0729 20:26:02.547479  764731 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:26:02.547924  764731 main.go:141] libmachine: Using API Version  1
	I0729 20:26:02.547950  764731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:26:02.548300  764731 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:26:02.548476  764731 main.go:141] libmachine: (ha-344518-m04) Calling .GetIP
	I0729 20:26:02.551425  764731 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:26:02.551861  764731 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:23:28 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:26:02.551880  764731 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:26:02.552066  764731 host.go:66] Checking if "ha-344518-m04" exists ...
	I0729 20:26:02.552353  764731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:26:02.552386  764731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:26:02.567377  764731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0729 20:26:02.567861  764731 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:26:02.568412  764731 main.go:141] libmachine: Using API Version  1
	I0729 20:26:02.568436  764731 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:26:02.568749  764731 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:26:02.568918  764731 main.go:141] libmachine: (ha-344518-m04) Calling .DriverName
	I0729 20:26:02.569125  764731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:26:02.569145  764731 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHHostname
	I0729 20:26:02.571450  764731 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:26:02.571819  764731 main.go:141] libmachine: (ha-344518-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:dd:b2", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:23:28 +0000 UTC Type:0 Mac:52:54:00:dd:dd:b2 Iaid: IPaddr:192.168.39.70 Prefix:24 Hostname:ha-344518-m04 Clientid:01:52:54:00:dd:dd:b2}
	I0729 20:26:02.571841  764731 main.go:141] libmachine: (ha-344518-m04) DBG | domain ha-344518-m04 has defined IP address 192.168.39.70 and MAC address 52:54:00:dd:dd:b2 in network mk-ha-344518
	I0729 20:26:02.571945  764731 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHPort
	I0729 20:26:02.572154  764731 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHKeyPath
	I0729 20:26:02.572412  764731 main.go:141] libmachine: (ha-344518-m04) Calling .GetSSHUsername
	I0729 20:26:02.572595  764731 sshutil.go:53] new ssh client: &{IP:192.168.39.70 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518-m04/id_rsa Username:docker}
	W0729 20:26:20.904255  764731 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.70:22: connect: no route to host
	W0729 20:26:20.904432  764731 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host
	E0729 20:26:20.904455  764731 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host
	I0729 20:26:20.904462  764731 status.go:257] ha-344518-m04 status: &{Name:ha-344518-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0729 20:26:20.904489  764731 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.70:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr" : exit status 3
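The m04 failure above reduces to `dial tcp 192.168.39.70:22: connect: no route to host`, i.e. the worker's SSH endpoint is unreachable at the network level rather than kubelet merely being stopped inside the guest. As an illustrative, hypothetical aside (not part of the test suite), a minimal Go sketch that probes that endpoint directly to separate the two cases; the address is taken from the log, the timeout is an arbitrary assumption.

	// sshprobe.go: illustrative sketch only. Checks TCP reachability of a node's
	// SSH port to distinguish a host/network outage ("no route to host", as for
	// ha-344518-m04 above) from a service failure inside a reachable guest.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "192.168.39.70:22"  // ha-344518-m04 SSH endpoint from the log
		timeout := 5 * time.Second  // assumption: arbitrary probe timeout
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			// Mirrors the failure mode reported in the status output above.
			fmt.Printf("%s unreachable: %v\n", addr, err)
			return
		}
		defer conn.Close()
		fmt.Printf("%s reachable; the problem is likely inside the guest\n", addr)
	}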
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-344518 -n ha-344518
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-344518 logs -n 25: (1.638714661s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-344518 ssh -n ha-344518-m02 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m03_ha-344518-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m03:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04:/home/docker/cp-test_ha-344518-m03_ha-344518-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m04 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m03_ha-344518-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-344518 cp testdata/cp-test.txt                                                | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1656315222/001/cp-test_ha-344518-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518:/home/docker/cp-test_ha-344518-m04_ha-344518.txt                       |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518 sudo cat                                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m04_ha-344518.txt                                 |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m02:/home/docker/cp-test_ha-344518-m04_ha-344518-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m02 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m04_ha-344518-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m03:/home/docker/cp-test_ha-344518-m04_ha-344518-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n                                                                 | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | ha-344518-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-344518 ssh -n ha-344518-m03 sudo cat                                          | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC | 29 Jul 24 20:14 UTC |
	|         | /home/docker/cp-test_ha-344518-m04_ha-344518-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-344518 node stop m02 -v=7                                                     | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-344518 node start m02 -v=7                                                    | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:16 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-344518 -v=7                                                           | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-344518 -v=7                                                                | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-344518 --wait=true -v=7                                                    | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:19 UTC | 29 Jul 24 20:23 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-344518                                                                | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:23 UTC |                     |
	| node    | ha-344518 node delete m03 -v=7                                                   | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:23 UTC | 29 Jul 24 20:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-344518 stop -v=7                                                              | ha-344518 | jenkins | v1.33.1 | 29 Jul 24 20:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 20:19:39
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 20:19:39.326027  762482 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:19:39.326148  762482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:19:39.326157  762482 out.go:304] Setting ErrFile to fd 2...
	I0729 20:19:39.326161  762482 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:19:39.326347  762482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:19:39.326905  762482 out.go:298] Setting JSON to false
	I0729 20:19:39.327899  762482 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":14526,"bootTime":1722269853,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 20:19:39.327965  762482 start.go:139] virtualization: kvm guest
	I0729 20:19:39.330992  762482 out.go:177] * [ha-344518] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 20:19:39.332536  762482 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 20:19:39.332574  762482 notify.go:220] Checking for updates...
	I0729 20:19:39.335386  762482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 20:19:39.336598  762482 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:19:39.337778  762482 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:19:39.339087  762482 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 20:19:39.340542  762482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 20:19:39.342366  762482 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:19:39.342512  762482 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 20:19:39.343163  762482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:19:39.343267  762482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:19:39.358835  762482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40331
	I0729 20:19:39.359304  762482 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:19:39.359949  762482 main.go:141] libmachine: Using API Version  1
	I0729 20:19:39.359972  762482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:19:39.360420  762482 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:19:39.360643  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:19:39.396513  762482 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 20:19:39.397700  762482 start.go:297] selected driver: kvm2
	I0729 20:19:39.397713  762482 start.go:901] validating driver "kvm2" against &{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:19:39.397853  762482 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 20:19:39.398178  762482 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:19:39.398249  762482 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 20:19:39.414151  762482 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 20:19:39.414862  762482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 20:19:39.414893  762482 cni.go:84] Creating CNI manager for ""
	I0729 20:19:39.414899  762482 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 20:19:39.414974  762482 start.go:340] cluster config:
	{Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:19:39.415107  762482 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:19:39.418022  762482 out.go:177] * Starting "ha-344518" primary control-plane node in "ha-344518" cluster
	I0729 20:19:39.419422  762482 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:19:39.419468  762482 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 20:19:39.419480  762482 cache.go:56] Caching tarball of preloaded images
	I0729 20:19:39.419613  762482 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 20:19:39.419627  762482 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 20:19:39.419768  762482 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/config.json ...
	I0729 20:19:39.419988  762482 start.go:360] acquireMachinesLock for ha-344518: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:19:39.420055  762482 start.go:364] duration metric: took 43.383µs to acquireMachinesLock for "ha-344518"
	I0729 20:19:39.420077  762482 start.go:96] Skipping create...Using existing machine configuration
	I0729 20:19:39.420085  762482 fix.go:54] fixHost starting: 
	I0729 20:19:39.420366  762482 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:19:39.420403  762482 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:19:39.436580  762482 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0729 20:19:39.437057  762482 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:19:39.437566  762482 main.go:141] libmachine: Using API Version  1
	I0729 20:19:39.437610  762482 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:19:39.437965  762482 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:19:39.438204  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:19:39.438357  762482 main.go:141] libmachine: (ha-344518) Calling .GetState
	I0729 20:19:39.440159  762482 fix.go:112] recreateIfNeeded on ha-344518: state=Running err=<nil>
	W0729 20:19:39.440191  762482 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 20:19:39.442107  762482 out.go:177] * Updating the running kvm2 "ha-344518" VM ...
	I0729 20:19:39.443560  762482 machine.go:94] provisionDockerMachine start ...
	I0729 20:19:39.443586  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:19:39.443815  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:19:39.447224  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.447838  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:19:39.447873  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.448120  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:19:39.448347  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:39.448519  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:39.448661  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:19:39.448833  762482 main.go:141] libmachine: Using SSH client type: native
	I0729 20:19:39.449040  762482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:19:39.449053  762482 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 20:19:39.556910  762482 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344518
	
	I0729 20:19:39.556954  762482 main.go:141] libmachine: (ha-344518) Calling .GetMachineName
	I0729 20:19:39.557281  762482 buildroot.go:166] provisioning hostname "ha-344518"
	I0729 20:19:39.557315  762482 main.go:141] libmachine: (ha-344518) Calling .GetMachineName
	I0729 20:19:39.557528  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:19:39.560296  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.560652  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:19:39.560680  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.560865  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:19:39.561075  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:39.561215  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:39.561340  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:19:39.561498  762482 main.go:141] libmachine: Using SSH client type: native
	I0729 20:19:39.561675  762482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:19:39.561686  762482 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-344518 && echo "ha-344518" | sudo tee /etc/hostname
	I0729 20:19:39.682863  762482 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-344518
	
	I0729 20:19:39.682899  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:19:39.685791  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.686224  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:19:39.686252  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.686435  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:19:39.686630  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:39.686863  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:39.687132  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:19:39.687359  762482 main.go:141] libmachine: Using SSH client type: native
	I0729 20:19:39.687585  762482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:19:39.687602  762482 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-344518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-344518/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-344518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 20:19:39.792936  762482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:19:39.792970  762482 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 20:19:39.793010  762482 buildroot.go:174] setting up certificates
	I0729 20:19:39.793020  762482 provision.go:84] configureAuth start
	I0729 20:19:39.793030  762482 main.go:141] libmachine: (ha-344518) Calling .GetMachineName
	I0729 20:19:39.793319  762482 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:19:39.796203  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.796591  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:19:39.796628  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.796739  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:19:39.799195  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.799707  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:19:39.799733  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:39.799884  762482 provision.go:143] copyHostCerts
	I0729 20:19:39.799941  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:19:39.799991  762482 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem, removing ...
	I0729 20:19:39.800008  762482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:19:39.800204  762482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 20:19:39.800337  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:19:39.800371  762482 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem, removing ...
	I0729 20:19:39.800382  762482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:19:39.800425  762482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 20:19:39.800485  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:19:39.800509  762482 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem, removing ...
	I0729 20:19:39.800517  762482 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:19:39.800551  762482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 20:19:39.800616  762482 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.ha-344518 san=[127.0.0.1 192.168.39.238 ha-344518 localhost minikube]
	I0729 20:19:39.998916  762482 provision.go:177] copyRemoteCerts
	I0729 20:19:39.999008  762482 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 20:19:39.999046  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:19:40.002019  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:40.002486  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:19:40.002516  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:40.002762  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:19:40.003013  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:40.003162  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:19:40.003293  762482 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:19:40.086393  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 20:19:40.086462  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 20:19:40.111405  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 20:19:40.111509  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 20:19:40.134834  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 20:19:40.134924  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0729 20:19:40.158251  762482 provision.go:87] duration metric: took 365.212503ms to configureAuth
	I0729 20:19:40.158286  762482 buildroot.go:189] setting minikube options for container-runtime
	I0729 20:19:40.158528  762482 config.go:182] Loaded profile config "ha-344518": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:19:40.158613  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:19:40.160989  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:40.161368  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:19:40.161395  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:19:40.161653  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:19:40.161891  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:40.162084  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:19:40.162220  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:19:40.162427  762482 main.go:141] libmachine: Using SSH client type: native
	I0729 20:19:40.162592  762482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:19:40.162605  762482 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 20:21:11.038553  762482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 20:21:11.038583  762482 machine.go:97] duration metric: took 1m31.595004592s to provisionDockerMachine
	I0729 20:21:11.038596  762482 start.go:293] postStartSetup for "ha-344518" (driver="kvm2")
	I0729 20:21:11.038609  762482 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 20:21:11.038652  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:21:11.039094  762482 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 20:21:11.039126  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:21:11.042368  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.042798  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:21:11.042821  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.043073  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:21:11.043281  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:21:11.043448  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:21:11.043569  762482 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:21:11.126877  762482 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 20:21:11.130842  762482 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 20:21:11.130865  762482 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 20:21:11.130933  762482 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 20:21:11.131031  762482 filesync.go:149] local asset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> 7409622.pem in /etc/ssl/certs
	I0729 20:21:11.131051  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /etc/ssl/certs/7409622.pem
	I0729 20:21:11.131154  762482 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 20:21:11.140934  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:21:11.164513  762482 start.go:296] duration metric: took 125.901681ms for postStartSetup
	I0729 20:21:11.164567  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:21:11.164866  762482 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0729 20:21:11.164898  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:21:11.167772  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.168227  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:21:11.168252  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.168407  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:21:11.168678  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:21:11.168852  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:21:11.169002  762482 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	W0729 20:21:11.250070  762482 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0729 20:21:11.250106  762482 fix.go:56] duration metric: took 1m31.830020604s for fixHost
	I0729 20:21:11.250135  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:21:11.253222  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.253670  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:21:11.253699  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.253863  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:21:11.254082  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:21:11.254243  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:21:11.254409  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:21:11.254596  762482 main.go:141] libmachine: Using SSH client type: native
	I0729 20:21:11.254795  762482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.238 22 <nil> <nil>}
	I0729 20:21:11.254809  762482 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 20:21:11.356735  762482 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722284471.314847115
	
	I0729 20:21:11.356758  762482 fix.go:216] guest clock: 1722284471.314847115
	I0729 20:21:11.356768  762482 fix.go:229] Guest: 2024-07-29 20:21:11.314847115 +0000 UTC Remote: 2024-07-29 20:21:11.250115186 +0000 UTC m=+91.960846804 (delta=64.731929ms)
	I0729 20:21:11.356820  762482 fix.go:200] guest clock delta is within tolerance: 64.731929ms
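(Editor's note, for readers following the clock check above: the harness reads the guest clock over SSH with `date`, compares it against the host timestamp, and accepts the drift when it is within tolerance. The sketch below is a minimal, hypothetical Go illustration of that comparison using the exact timestamps from the log; the 2-second tolerance is an assumed placeholder, not minikube's actual setting.)

```go
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute drift between the guest
// clock and the host clock is small enough to skip resynchronisation.
// The tolerance value is a placeholder, not minikube's real threshold.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Timestamps taken from the log lines above.
	guest := time.Unix(1722284471, 314847115).UTC()                  // guest clock: 1722284471.314847115
	host := time.Date(2024, 7, 29, 20, 21, 11, 250115186, time.UTC) // remote (host-side) timestamp

	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, ok) // prints delta=64.731929ms withinTolerance=true
}
```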
	I0729 20:21:11.356830  762482 start.go:83] releasing machines lock for "ha-344518", held for 1m31.936761283s
	I0729 20:21:11.356861  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:21:11.357169  762482 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:21:11.359989  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.360441  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:21:11.360471  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.360656  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:21:11.361209  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:21:11.361397  762482 main.go:141] libmachine: (ha-344518) Calling .DriverName
	I0729 20:21:11.361498  762482 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 20:21:11.361547  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:21:11.361675  762482 ssh_runner.go:195] Run: cat /version.json
	I0729 20:21:11.361702  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHHostname
	I0729 20:21:11.364232  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.364323  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.364683  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:21:11.364709  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.364736  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:21:11.364764  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:11.364888  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:21:11.365017  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHPort
	I0729 20:21:11.365084  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:21:11.365158  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHKeyPath
	I0729 20:21:11.365229  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:21:11.365304  762482 main.go:141] libmachine: (ha-344518) Calling .GetSSHUsername
	I0729 20:21:11.365393  762482 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:21:11.365449  762482 sshutil.go:53] new ssh client: &{IP:192.168.39.238 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/ha-344518/id_rsa Username:docker}
	I0729 20:21:11.470373  762482 ssh_runner.go:195] Run: systemctl --version
	I0729 20:21:11.476161  762482 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 20:21:11.635605  762482 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 20:21:11.642978  762482 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 20:21:11.643057  762482 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 20:21:11.652341  762482 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 20:21:11.652363  762482 start.go:495] detecting cgroup driver to use...
	I0729 20:21:11.652444  762482 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 20:21:11.669843  762482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 20:21:11.684206  762482 docker.go:216] disabling cri-docker service (if available) ...
	I0729 20:21:11.684321  762482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 20:21:11.697442  762482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 20:21:11.710557  762482 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 20:21:11.852585  762482 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 20:21:12.006900  762482 docker.go:232] disabling docker service ...
	I0729 20:21:12.006976  762482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 20:21:12.023767  762482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 20:21:12.036385  762482 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 20:21:12.182468  762482 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 20:21:12.331276  762482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 20:21:12.344361  762482 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 20:21:12.361855  762482 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 20:21:12.361938  762482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:21:12.371774  762482 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 20:21:12.371838  762482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:21:12.381257  762482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:21:12.390988  762482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:21:12.400810  762482 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 20:21:12.411065  762482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:21:12.421009  762482 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:21:12.431791  762482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:21:12.441246  762482 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 20:21:12.450184  762482 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 20:21:12.459022  762482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:21:12.592451  762482 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 20:21:17.974217  762482 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.381434589s)
	I0729 20:21:17.974317  762482 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 20:21:17.974413  762482 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 20:21:17.980046  762482 start.go:563] Will wait 60s for crictl version
	I0729 20:21:17.980110  762482 ssh_runner.go:195] Run: which crictl
	I0729 20:21:17.983800  762482 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 20:21:18.024115  762482 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 20:21:18.024213  762482 ssh_runner.go:195] Run: crio --version
	I0729 20:21:18.052211  762482 ssh_runner.go:195] Run: crio --version
	I0729 20:21:18.085384  762482 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 20:21:18.086779  762482 main.go:141] libmachine: (ha-344518) Calling .GetIP
	I0729 20:21:18.089632  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:18.090044  762482 main.go:141] libmachine: (ha-344518) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e2:94:80", ip: ""} in network mk-ha-344518: {Iface:virbr1 ExpiryTime:2024-07-29 21:09:20 +0000 UTC Type:0 Mac:52:54:00:e2:94:80 Iaid: IPaddr:192.168.39.238 Prefix:24 Hostname:ha-344518 Clientid:01:52:54:00:e2:94:80}
	I0729 20:21:18.090073  762482 main.go:141] libmachine: (ha-344518) DBG | domain ha-344518 has defined IP address 192.168.39.238 and MAC address 52:54:00:e2:94:80 in network mk-ha-344518
	I0729 20:21:18.090330  762482 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 20:21:18.095050  762482 kubeadm.go:883] updating cluster {Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fres
hpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 20:21:18.095205  762482 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:21:18.095246  762482 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:21:18.139778  762482 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 20:21:18.139811  762482 crio.go:433] Images already preloaded, skipping extraction
	I0729 20:21:18.139866  762482 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:21:18.171773  762482 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 20:21:18.171813  762482 cache_images.go:84] Images are preloaded, skipping loading
	I0729 20:21:18.171827  762482 kubeadm.go:934] updating node { 192.168.39.238 8443 v1.30.3 crio true true} ...
	I0729 20:21:18.171974  762482 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-344518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.238
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 20:21:18.172076  762482 ssh_runner.go:195] Run: crio config
	I0729 20:21:18.219780  762482 cni.go:84] Creating CNI manager for ""
	I0729 20:21:18.219805  762482 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0729 20:21:18.219821  762482 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 20:21:18.219851  762482 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.238 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-344518 NodeName:ha-344518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.238"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.238 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 20:21:18.220015  762482 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.238
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-344518"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.238
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.238"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 20:21:18.220057  762482 kube-vip.go:115] generating kube-vip config ...
	I0729 20:21:18.220119  762482 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0729 20:21:18.230986  762482 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0729 20:21:18.231115  762482 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0729 20:21:18.231178  762482 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 20:21:18.240550  762482 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 20:21:18.240617  762482 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0729 20:21:18.249593  762482 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0729 20:21:18.265334  762482 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 20:21:18.280224  762482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 20:21:18.295551  762482 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0729 20:21:18.311581  762482 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0729 20:21:18.315365  762482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:21:18.458156  762482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:21:18.483712  762482 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518 for IP: 192.168.39.238
	I0729 20:21:18.483742  762482 certs.go:194] generating shared ca certs ...
	I0729 20:21:18.483775  762482 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:21:18.483997  762482 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 20:21:18.484094  762482 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 20:21:18.484113  762482 certs.go:256] generating profile certs ...
	I0729 20:21:18.484246  762482 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/client.key
	I0729 20:21:18.484279  762482 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.93cf0b68
	I0729 20:21:18.484296  762482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.93cf0b68 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.238 192.168.39.104 192.168.39.53 192.168.39.254]
	I0729 20:21:18.619358  762482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.93cf0b68 ...
	I0729 20:21:18.619398  762482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.93cf0b68: {Name:mkd34a221960939dcd8a99abb5e8f25076f38c80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:21:18.619593  762482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.93cf0b68 ...
	I0729 20:21:18.619606  762482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.93cf0b68: {Name:mk8039e4c36f36c5da11f7adf9b8bbc5fb38ef2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:21:18.619682  762482 certs.go:381] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt.93cf0b68 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt
	I0729 20:21:18.619842  762482 certs.go:385] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key.93cf0b68 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key
	I0729 20:21:18.619985  762482 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key
	I0729 20:21:18.620002  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 20:21:18.620015  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 20:21:18.620027  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 20:21:18.620066  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 20:21:18.620085  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 20:21:18.620110  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 20:21:18.620131  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 20:21:18.620149  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 20:21:18.620216  762482 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem (1338 bytes)
	W0729 20:21:18.620251  762482 certs.go:480] ignoring /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962_empty.pem, impossibly tiny 0 bytes
	I0729 20:21:18.620261  762482 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 20:21:18.620283  762482 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 20:21:18.620311  762482 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 20:21:18.620335  762482 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 20:21:18.620374  762482 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:21:18.620402  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /usr/share/ca-certificates/7409622.pem
	I0729 20:21:18.620416  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:21:18.620428  762482 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem -> /usr/share/ca-certificates/740962.pem
	I0729 20:21:18.621049  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 20:21:18.645035  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 20:21:18.667132  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 20:21:18.692169  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 20:21:18.714701  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0729 20:21:18.740573  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 20:21:18.765567  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 20:21:18.790733  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/ha-344518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 20:21:18.816161  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /usr/share/ca-certificates/7409622.pem (1708 bytes)
	I0729 20:21:18.840834  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 20:21:18.865904  762482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem --> /usr/share/ca-certificates/740962.pem (1338 bytes)
	I0729 20:21:18.891428  762482 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 20:21:18.909205  762482 ssh_runner.go:195] Run: openssl version
	I0729 20:21:18.915521  762482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7409622.pem && ln -fs /usr/share/ca-certificates/7409622.pem /etc/ssl/certs/7409622.pem"
	I0729 20:21:18.926126  762482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7409622.pem
	I0729 20:21:18.930619  762482 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 20:21:18.930674  762482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7409622.pem
	I0729 20:21:18.936355  762482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7409622.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 20:21:18.945676  762482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 20:21:18.955672  762482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:21:18.959827  762482 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:21:18.959936  762482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:21:18.965137  762482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 20:21:18.973911  762482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/740962.pem && ln -fs /usr/share/ca-certificates/740962.pem /etc/ssl/certs/740962.pem"
	I0729 20:21:18.983957  762482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/740962.pem
	I0729 20:21:18.988251  762482 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 20:21:18.988310  762482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/740962.pem
	I0729 20:21:18.993596  762482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/740962.pem /etc/ssl/certs/51391683.0"
	I0729 20:21:19.002786  762482 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:21:19.007180  762482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 20:21:19.012528  762482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 20:21:19.017944  762482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 20:21:19.023055  762482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 20:21:19.028572  762482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 20:21:19.033785  762482 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
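(Editor's note: the `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each control-plane certificate remains valid for at least the next 24 hours. The following is a minimal Go sketch of an equivalent check, assuming a PEM-encoded certificate file; the helper name and the example path are illustrative, not minikube's actual code.)

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within the given window — roughly the Go analogue of
// `openssl x509 -noout -in <file> -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the certificate's NotAfter falls inside the window.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Example path taken from the log; adjust for the certificate to inspect.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}
```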
	I0729 20:21:19.039103  762482 kubeadm.go:392] StartCluster: {Name:ha-344518 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-344518 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.238 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.104 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.53 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.70 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpo
d:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:21:19.039274  762482 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 20:21:19.039330  762482 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 20:21:19.076562  762482 cri.go:89] found id: "2be21b3762e2b8f6207f4c6b63f22b53b15d2459ce4818a52d71a0219a66b4aa"
	I0729 20:21:19.076588  762482 cri.go:89] found id: "3c06c4829c7e53e9437b7427b8b47e0ba76a5f614452c9d673ed69fedae6922b"
	I0729 20:21:19.076592  762482 cri.go:89] found id: "ff31897b9a6449fdc1cf23b389b94e26797efbc68df8d8104de119eb5c9dd498"
	I0729 20:21:19.076595  762482 cri.go:89] found id: "7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c"
	I0729 20:21:19.076598  762482 cri.go:89] found id: "150057459b6854002f094be091609a708f47a33e024e971dd0a52ee45059feea"
	I0729 20:21:19.076601  762482 cri.go:89] found id: "4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a"
	I0729 20:21:19.076603  762482 cri.go:89] found id: "594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f"
	I0729 20:21:19.076606  762482 cri.go:89] found id: "d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454"
	I0729 20:21:19.076608  762482 cri.go:89] found id: "a5bf9f11f403485bba11bb296707954ef1f3951cd0686f3c2aef04ec544f6dfb"
	I0729 20:21:19.076615  762482 cri.go:89] found id: "1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be"
	I0729 20:21:19.076622  762482 cri.go:89] found id: "d1cab255995a78a5644e30400e94f037504f1f6a162cac7023d3b2074899a0e7"
	I0729 20:21:19.076626  762482 cri.go:89] found id: "3e957bb1c15cb6b1d0159a0941f43678dfa08f25dc582d6dd58a8d0b4f5f5c00"
	I0729 20:21:19.076630  762482 cri.go:89] found id: "a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50"
	I0729 20:21:19.076636  762482 cri.go:89] found id: ""
	I0729 20:21:19.076689  762482 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.556181613Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0b30fb64-53e2-453d-b943-3f1fe8693be9 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.557555001Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d657ef27-c2fa-46d8-8709-42fc9dfe59aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.558024069Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722284781557995600,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d657ef27-c2fa-46d8-8709-42fc9dfe59aa name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.558650570Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5b7fb185-355e-452b-8d4e-297124f04db7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.558713724Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5b7fb185-355e-452b-8d4e-297124f04db7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.559132037Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2991c901db3e1b2a53efc55a0d386d4041030802fb3328bd23a4aa5102c7cd3,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722284575664724942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6269dfd02a3c7cfdd496f304797388313ebc111d929c02148f9a281a4f6ef890,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722284529667731964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18f890b02c28d05903c6394651080defb961397049bd490d97c8f2e0a2f49f1,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722284523690878522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898a9f8b1999b5ef70ef31e3d508518770493bfcf282db7bf1a3f279f73aa889,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722284523671176009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42f63c809b9609669840eaf7839a4f8ec6df83b06781be68768c1d3b6bd5ecea,PodSandboxId:e2ddd8e9a60986cf2dc29be143b6bcb581574621244e65cbfa8a976a1b8bf857,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722284518947381848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82a77f6b5639a109c085d53999ee012c50f9a9f038a9310a3ee01a61c73e937,PodSandboxId:71dacd5a14cd22bfc4ae3c928dbae9aab210b6fd9afc190f92ab3100aa1a4a9c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722284500722364305,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006bc482e26170b5fd3d9110ea9ae2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad9db5d08667b5f9b315895e9ad4805f194b40b7553a1d700418f8916ff52c,PodSandboxId:676f5bb2dc61a1b314272543ab31f592744af8c73016a17c0c0068aecacbb23d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722284485769533868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:80e938336fd3e7fbc1694fd82610e1664e54063f54a808bdec44e98b5ddfe3ec,PodSandboxId:b7e0a5882dba451ee7766f690b54d720326bbd23f2721d9bb96f998112cbc402,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722284485543956290,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78626ab
b7e0bcf685d9788471a742f668b151da6faa3f321707ac63f8f1bbe6,PodSandboxId:052dda1765b54783be9a3b5fd109f8c1c982f804666524462cb495164a2f9edc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284485608060841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5882d9060c0d61bb94eab44893a0cb387e9034ecac4f2a1228531ab368fc8746,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722284485467809885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89a9f7056c1b235c762c9454796c713900a9d4ec9575b84e0e54a9dfbf600e3,PodSandboxId:447f30df39f92edeb32c865396a059a31e9948fdce84f9faced2368d3a8a9343,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722284485477507106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973ffc8ba50420879151776d566b23f1cb59e6893c2cc15be0144e5c2d193a7c,PodSandboxId:bca03cf071a72d5a04b8c088258f9d90d0fab2624dd814d39b8ec4bf6b99c1e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722284485361801530,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab86b8020816100d094c1b48821b0f1df477e1f9e093030148b4ce3ffbe90d8,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722284485405022048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8042b04ce3ea942c684bd08b20f0c8e3b640b7a9be711fb638462d00df1694c0,PodSandboxId:6fbba4fa017b1801e8194ba184ec2b3ef3dbdc0e1af8aa12f5e6c7782e840c02,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284479277598584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722283977503515029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annot
ations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722283817764610242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722283817701895522,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722283806075712230,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722283802307894993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722283781396492612,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722283781452418652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5b7fb185-355e-452b-8d4e-297124f04db7 name=/runtime.v1.RuntimeService/ListContainers
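	The ListContainers dump that ends here is CRI-O answering one of the kubelet's periodic polls over its gRPC socket. For readers who want to reproduce the same query against this node, the following is a minimal, hypothetical Go sketch (not part of the test suite), assuming the default CRI-O endpoint unix:///var/run/crio/crio.sock and the k8s.io/cri-api v1 client; on the VM itself, `sudo crictl ps -a` surfaces the same information.

	// list_containers.go - hedged sketch: query CRI-O's RuntimeService the same
	// way the kubelet does in the log entries above. Assumes the default CRI-O
	// socket path; adjust the endpoint for other runtimes.
	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial CRI-O: %v", err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		// An empty filter corresponds to the "No filters were applied,
		// returning full container list" debug line in the log.
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{},
		})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s\t%s\tattempt=%d\t%s\n",
				c.Id[:13], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}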
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.599993736Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12ef5219-aaf2-4ea3-873e-101e135f93da name=/runtime.v1.RuntimeService/Version
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.600076969Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12ef5219-aaf2-4ea3-873e-101e135f93da name=/runtime.v1.RuntimeService/Version
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.601875230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f6b086ff-1d75-4df3-b938-96b143a52c5b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.602571264Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722284781602541623,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f6b086ff-1d75-4df3-b938-96b143a52c5b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.603074985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=296ec0e3-8326-4c51-a7ab-1dffdade421a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.603134802Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=296ec0e3-8326-4c51-a7ab-1dffdade421a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.603617282Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2991c901db3e1b2a53efc55a0d386d4041030802fb3328bd23a4aa5102c7cd3,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722284575664724942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6269dfd02a3c7cfdd496f304797388313ebc111d929c02148f9a281a4f6ef890,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722284529667731964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18f890b02c28d05903c6394651080defb961397049bd490d97c8f2e0a2f49f1,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722284523690878522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898a9f8b1999b5ef70ef31e3d508518770493bfcf282db7bf1a3f279f73aa889,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722284523671176009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42f63c809b9609669840eaf7839a4f8ec6df83b06781be68768c1d3b6bd5ecea,PodSandboxId:e2ddd8e9a60986cf2dc29be143b6bcb581574621244e65cbfa8a976a1b8bf857,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722284518947381848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82a77f6b5639a109c085d53999ee012c50f9a9f038a9310a3ee01a61c73e937,PodSandboxId:71dacd5a14cd22bfc4ae3c928dbae9aab210b6fd9afc190f92ab3100aa1a4a9c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722284500722364305,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006bc482e26170b5fd3d9110ea9ae2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad9db5d08667b5f9b315895e9ad4805f194b40b7553a1d700418f8916ff52c,PodSandboxId:676f5bb2dc61a1b314272543ab31f592744af8c73016a17c0c0068aecacbb23d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722284485769533868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:80e938336fd3e7fbc1694fd82610e1664e54063f54a808bdec44e98b5ddfe3ec,PodSandboxId:b7e0a5882dba451ee7766f690b54d720326bbd23f2721d9bb96f998112cbc402,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722284485543956290,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78626ab
b7e0bcf685d9788471a742f668b151da6faa3f321707ac63f8f1bbe6,PodSandboxId:052dda1765b54783be9a3b5fd109f8c1c982f804666524462cb495164a2f9edc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284485608060841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5882d9060c0d61bb94eab44893a0cb387e9034ecac4f2a1228531ab368fc8746,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722284485467809885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89a9f7056c1b235c762c9454796c713900a9d4ec9575b84e0e54a9dfbf600e3,PodSandboxId:447f30df39f92edeb32c865396a059a31e9948fdce84f9faced2368d3a8a9343,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722284485477507106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973ffc8ba50420879151776d566b23f1cb59e6893c2cc15be0144e5c2d193a7c,PodSandboxId:bca03cf071a72d5a04b8c088258f9d90d0fab2624dd814d39b8ec4bf6b99c1e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722284485361801530,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab86b8020816100d094c1b48821b0f1df477e1f9e093030148b4ce3ffbe90d8,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722284485405022048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8042b04ce3ea942c684bd08b20f0c8e3b640b7a9be711fb638462d00df1694c0,PodSandboxId:6fbba4fa017b1801e8194ba184ec2b3ef3dbdc0e1af8aa12f5e6c7782e840c02,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284479277598584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722283977503515029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annot
ations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722283817764610242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722283817701895522,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722283806075712230,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722283802307894993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722283781396492612,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722283781452418652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=296ec0e3-8326-4c51-a7ab-1dffdade421a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.648130583Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=88c78ff7-4071-4f93-9303-63fca4fdb706 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.648266802Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=88c78ff7-4071-4f93-9303-63fca4fdb706 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.649441586Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ea92e96-d6f9-4ba2-a6fd-09db050250f9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.650054411Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722284781649863822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:154769,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ea92e96-d6f9-4ba2-a6fd-09db050250f9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.650530210Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ca79f559-94c9-4367-ae82-93237d944a4f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.650586522Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ca79f559-94c9-4367-ae82-93237d944a4f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.651112831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2991c901db3e1b2a53efc55a0d386d4041030802fb3328bd23a4aa5102c7cd3,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722284575664724942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6269dfd02a3c7cfdd496f304797388313ebc111d929c02148f9a281a4f6ef890,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722284529667731964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c18f890b02c28d05903c6394651080defb961397049bd490d97c8f2e0a2f49f1,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722284523690878522,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminat
ionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898a9f8b1999b5ef70ef31e3d508518770493bfcf282db7bf1a3f279f73aa889,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722284523671176009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42f63c809b9609669840eaf7839a4f8ec6df83b06781be68768c1d3b6bd5ecea,PodSandboxId:e2ddd8e9a60986cf2dc29be143b6bcb581574621244e65cbfa8a976a1b8bf857,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722284518947381848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82a77f6b5639a109c085d53999ee012c50f9a9f038a9310a3ee01a61c73e937,PodSandboxId:71dacd5a14cd22bfc4ae3c928dbae9aab210b6fd9afc190f92ab3100aa1a4a9c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722284500722364305,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006bc482e26170b5fd3d9110ea9ae2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.k
ubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad9db5d08667b5f9b315895e9ad4805f194b40b7553a1d700418f8916ff52c,PodSandboxId:676f5bb2dc61a1b314272543ab31f592744af8c73016a17c0c0068aecacbb23d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722284485769533868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePer
iod: 30,},},&Container{Id:80e938336fd3e7fbc1694fd82610e1664e54063f54a808bdec44e98b5ddfe3ec,PodSandboxId:b7e0a5882dba451ee7766f690b54d720326bbd23f2721d9bb96f998112cbc402,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722284485543956290,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f78626ab
b7e0bcf685d9788471a742f668b151da6faa3f321707ac63f8f1bbe6,PodSandboxId:052dda1765b54783be9a3b5fd109f8c1c982f804666524462cb495164a2f9edc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284485608060841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,
io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5882d9060c0d61bb94eab44893a0cb387e9034ecac4f2a1228531ab368fc8746,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722284485467809885,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 2,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89a9f7056c1b235c762c9454796c713900a9d4ec9575b84e0e54a9dfbf600e3,PodSandboxId:447f30df39f92edeb32c865396a059a31e9948fdce84f9faced2368d3a8a9343,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722284485477507106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973ffc8ba50420879151776d566b23f1cb59e6893c2cc15be0144e5c2d193a7c,PodSandboxId:bca03cf071a72d5a04b8c088258f9d90d0fab2624dd814d39b8ec4bf6b99c1e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722284485361801530,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.t
erminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cab86b8020816100d094c1b48821b0f1df477e1f9e093030148b4ce3ffbe90d8,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722284485405022048,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8042b04ce3ea942c684bd08b20f0c8e3b640b7a9be711fb638462d00df1694c0,PodSandboxId:6fbba4fa017b1801e8194ba184ec2b3ef3dbdc0e1af8aa12f5e6c7782e840c02,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284479277598584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\
"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:962f37271e54dfdf4d3460a9fa3133b43ba8774f3d2128c7094db5069252fdb2,PodSandboxId:4fd5554044288cdeb93fe71084f0294ef4186c2cbadf51a4522cef38a2f9defc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722283977503515029,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annot
ations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c,PodSandboxId:e6598d2da30cda28e0a3e88c40e1dfeeb755974b91bf8f1b5dfa6663fd6a0f39,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722283817764610242,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kube
rnetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a,PodSandboxId:ffb2234aef19148fc9191a03b19f4a6aae2c785b559f39d68ecb417bf19ffd60,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722283817701895522,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f,PodSandboxId:aa3121e476fc29995d7eba651757a8a993d4a0714a4fd0b0c20be89333c38988,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]
string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722283806075712230,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454,PodSandboxId:08408a18bb915b39f6e00005f088f02483b65e6577c1ab56fe4eef2cad62896f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,Runtime
Handler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722283802307894993,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50,PodSandboxId:259cc56efacfddd14de1d8445533ceda2c0f4115c95c835f73a20d3bec410749,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f06
2788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722283781396492612,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be,PodSandboxId:b61bed291d877e8adf3dc3887b766a50c91b6f2cbb622ee9efba9e1c77067185,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795
e2,State:CONTAINER_EXITED,CreatedAt:1722283781452418652,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ca79f559-94c9-4367-ae82-93237d944a4f name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.657038891Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bcfce277-12c0-4318-8802-b3da461176c6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.657608300Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e2ddd8e9a60986cf2dc29be143b6bcb581574621244e65cbfa8a976a1b8bf857,Metadata:&PodSandboxMetadata{Name:busybox-fc5497c4f-fp24v,Uid:34dba935-70e7-453a-996e-56c88c2e27ab,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722284518814794426,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,pod-template-hash: fc5497c4f,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T20:12:53.665052560Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:71dacd5a14cd22bfc4ae3c928dbae9aab210b6fd9afc190f92ab3100aa1a4a9c,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-344518,Uid:006bc482e26170b5fd3d9110ea9ae2fa,Namespace:kube-system,Attempt:0,},State:SANDBOX_RE
ADY,CreatedAt:1722284500624110438,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006bc482e26170b5fd3d9110ea9ae2fa,},Annotations:map[string]string{kubernetes.io/config.hash: 006bc482e26170b5fd3d9110ea9ae2fa,kubernetes.io/config.seen: 2024-07-29T20:21:18.269724466Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:052dda1765b54783be9a3b5fd109f8c1c982f804666524462cb495164a2f9edc,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-xpkp6,Uid:89bb48a7-72c4-4f23-aad8-530fc74e76e0,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722284485135247598,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07
-29T20:10:17.183921831Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:447f30df39f92edeb32c865396a059a31e9948fdce84f9faced2368d3a8a9343,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-344518,Uid:1a4f4fa7d6914af3b75fc6bf4496723b,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722284485118792773,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 1a4f4fa7d6914af3b75fc6bf4496723b,kubernetes.io/config.seen: 2024-07-29T20:09:47.600273531Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-344518,Uid:0fe3753966d0edf57072c858a7289147,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,Cr
eatedAt:1722284485091352634,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.238:8443,kubernetes.io/config.hash: 0fe3753966d0edf57072c858a7289147,kubernetes.io/config.seen: 2024-07-29T20:09:47.600279091Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-344518,Uid:cd59779c0bf07be17ee08a6f723c6a83,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722284485086414587,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cd59779c0bf07be17ee08a6f723c6a83,kubernetes.io/config.seen: 2024-07-29T20:09:47.600280017Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9e8bd9d2-8adf-47de-8e32-05d64002a631,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722284485067709203,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.i
o/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-07-29T20:10:17.190379867Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:676f5bb2dc61a1b314272543ab31f592744af8c73016a17c0c0068aecacbb23d,Metadata:&PodSandboxMetadata{Name:kube-proxy-fh6rg,Uid:275f3f36-39e1-461a-9c4d-4b2d8773d325,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722284485066812488,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernet
es.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T20:10:01.298076534Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bca03cf071a72d5a04b8c088258f9d90d0fab2624dd814d39b8ec4bf6b99c1e2,Metadata:&PodSandboxMetadata{Name:etcd-ha-344518,Uid:2baca04111e38314ac51bacec8d115e3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722284485047822208,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.238:2379,kubernetes.io/config.hash: 2baca04111e38314ac51bacec8d115e3,kubernetes.io/config.seen: 2024-07-29T20:09:47.60027
7914Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b7e0a5882dba451ee7766f690b54d720326bbd23f2721d9bb96f998112cbc402,Metadata:&PodSandboxMetadata{Name:kindnet-nl4kz,Uid:39441191-433d-4abc-b0c8-d4114713f68a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722284485025922279,Labels:map[string]string{app: kindnet,controller-revision-hash: 549967b474,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T20:10:01.284341089Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6fbba4fa017b1801e8194ba184ec2b3ef3dbdc0e1af8aa12f5e6c7782e840c02,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-wzmc5,Uid:2badd33a-9085-4e72-9934-f31c6142556e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722284479135271775,L
abels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T20:10:17.190603996Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=bcfce277-12c0-4318-8802-b3da461176c6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.658635856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c88cee47-3a1b-4f64-940c-8de5c3e7e0af name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.658704331Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c88cee47-3a1b-4f64-940c-8de5c3e7e0af name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:26:21 ha-344518 crio[3752]: time="2024-07-29 20:26:21.658915605Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d2991c901db3e1b2a53efc55a0d386d4041030802fb3328bd23a4aa5102c7cd3,PodSandboxId:8577d7c915c6b6dbbc80b4ffbc8098f7fc10ac188c6ae83ea9559e394737891f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722284575664724942,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e8bd9d2-8adf-47de-8e32-05d64002a631,},Annotations:map[string]string{io.kubernetes.container.hash: e7192524,io.kubernetes.container.restartCount: 4,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6269dfd02a3c7cfdd496f304797388313ebc111d929c02148f9a281a4f6ef890,PodSandboxId:00a6fc7aabd3b4f0fbbc25f148b4ff8d399c9d4631d7914e909c53d120b74249,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722284529667731964,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd59779c0bf07be17ee08a6f723c6a83,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.conta
iner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:898a9f8b1999b5ef70ef31e3d508518770493bfcf282db7bf1a3f279f73aa889,PodSandboxId:271d8a2d814274e6d87bfe3f11c2097acecaaf2d037ec7f1c49d0d71f66da75f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722284523671176009,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0fe3753966d0edf57072c858a7289147,},Annotations:map[string]string{io.kubernetes.container.hash: 77025bd7,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42f63c809b9609669840eaf7839a4f8ec6df83b06781be68768c1d3b6bd5ecea,PodSandboxId:e2ddd8e9a60986cf2dc29be143b6bcb581574621244e65cbfa8a976a1b8bf857,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722284518947381848,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-fp24v,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 34dba935-70e7-453a-996e-56c88c2e27ab,},Annotations:map[string]string{io.kubernetes.container.hash: b5d9aa47,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kube
rnetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d82a77f6b5639a109c085d53999ee012c50f9a9f038a9310a3ee01a61c73e937,PodSandboxId:71dacd5a14cd22bfc4ae3c928dbae9aab210b6fd9afc190f92ab3100aa1a4a9c,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12,State:CONTAINER_RUNNING,CreatedAt:1722284500722364305,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 006bc482e26170b5fd3d9110ea9ae2fa,},Annotations:map[string]string{io.kubernetes.container.hash: 3d55e6dc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePoli
cy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bad9db5d08667b5f9b315895e9ad4805f194b40b7553a1d700418f8916ff52c,PodSandboxId:676f5bb2dc61a1b314272543ab31f592744af8c73016a17c0c0068aecacbb23d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722284485769533868,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fh6rg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 275f3f36-39e1-461a-9c4d-4b2d8773d325,},Annotations:map[string]string{io.kubernetes.container.hash: 19d850dc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.termin
ationGracePeriod: 30,},},&Container{Id:80e938336fd3e7fbc1694fd82610e1664e54063f54a808bdec44e98b5ddfe3ec,PodSandboxId:b7e0a5882dba451ee7766f690b54d720326bbd23f2721d9bb96f998112cbc402,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722284485543956290,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-nl4kz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 39441191-433d-4abc-b0c8-d4114713f68a,},Annotations:map[string]string{io.kubernetes.container.hash: 581148ed,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Containe
r{Id:f78626abb7e0bcf685d9788471a742f668b151da6faa3f321707ac63f8f1bbe6,PodSandboxId:052dda1765b54783be9a3b5fd109f8c1c982f804666524462cb495164a2f9edc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284485608060841,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-xpkp6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89bb48a7-72c4-4f23-aad8-530fc74e76e0,},Annotations:map[string]string{io.kubernetes.container.hash: 1429c7c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.res
tartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c89a9f7056c1b235c762c9454796c713900a9d4ec9575b84e0e54a9dfbf600e3,PodSandboxId:447f30df39f92edeb32c865396a059a31e9948fdce84f9faced2368d3a8a9343,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722284485477507106,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a4f4fa7d6914af3b75fc6bf4496723b,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:973ffc8ba50420879151776d566b23f1cb59e6893c2cc15be0144e5c2d193a7c,PodSandboxId:bca03cf071a72d5a04b8c088258f9d90d0fab2624dd814d39b8ec4bf6b99c1e2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722284485361801530,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-344518,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2baca04111e38314ac51bacec8d115e3,},Annotations:map[string]string{io.kubernetes.container.hash: 428d02d3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8042b04ce3ea942c684bd08b20f0c8e3b640b7a9be711fb638462d00df1694c0,PodSandboxId:6fbba4fa017b1801e8194ba184ec2b3ef3dbdc0e1af8aa12f5e6c7782e840c02,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722284479277598584,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-wzmc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2badd33a-9085-4e72-9934-f31c6142556e,},Annotations:map[string]string{io.kubernetes.container.hash: e8301aed,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort
\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c88cee47-3a1b-4f64-940c-8de5c3e7e0af name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d2991c901db3e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   8577d7c915c6b       storage-provisioner
	6269dfd02a3c7       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   2                   00a6fc7aabd3b       kube-controller-manager-ha-344518
	c18f890b02c28       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   8577d7c915c6b       storage-provisioner
	898a9f8b1999b       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            3                   271d8a2d81427       kube-apiserver-ha-344518
	42f63c809b960       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   e2ddd8e9a6098       busybox-fc5497c4f-fp24v
	d82a77f6b5639       38af8ddebf499adc4631fe68b0ee224ffd6d7dd6b4aeeb393aff3d33cb94eb12                                      4 minutes ago       Running             kube-vip                  0                   71dacd5a14cd2       kube-vip-ha-344518
	5bad9db5d0866       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   676f5bb2dc61a       kube-proxy-fh6rg
	f78626abb7e0b       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Running             coredns                   1                   052dda1765b54       coredns-7db6d8ff4d-xpkp6
	80e938336fd3e       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   b7e0a5882dba4       kindnet-nl4kz
	c89a9f7056c1b       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   447f30df39f92       kube-scheduler-ha-344518
	5882d9060c0d6       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Exited              kube-apiserver            2                   271d8a2d81427       kube-apiserver-ha-344518
	cab86b8020816       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Exited              kube-controller-manager   1                   00a6fc7aabd3b       kube-controller-manager-ha-344518
	973ffc8ba5042       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   bca03cf071a72       etcd-ha-344518
	8042b04ce3ea9       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      5 minutes ago       Running             coredns                   1                   6fbba4fa017b1       coredns-7db6d8ff4d-wzmc5
	962f37271e54d       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   4fd5554044288       busybox-fc5497c4f-fp24v
	7bed7bb792810       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   e6598d2da30cd       coredns-7db6d8ff4d-xpkp6
	4d27dc2036f3c       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      16 minutes ago      Exited              coredns                   0                   ffb2234aef191       coredns-7db6d8ff4d-wzmc5
	594577e4d332f       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    16 minutes ago      Exited              kindnet-cni               0                   aa3121e476fc2       kindnet-nl4kz
	d79e4f49251f6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      16 minutes ago      Exited              kube-proxy                0                   08408a18bb915       kube-proxy-fh6rg
	1121b90510c21       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      16 minutes ago      Exited              kube-scheduler            0                   b61bed291d877       kube-scheduler-ha-344518
	a0e14d313861e       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      16 minutes ago      Exited              etcd                      0                   259cc56efacfd       etcd-ha-344518
	
	
	==> coredns [4d27dc2036f3c9ad2ea5779684ae4b5c8cc2d51d441aba5ac39c7c221fef6d6a] <==
	[INFO] 10.244.2.2:35340 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003153904s
	[INFO] 10.244.2.2:54596 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000140336s
	[INFO] 10.244.0.4:38854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001949954s
	[INFO] 10.244.0.4:39933 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113699s
	[INFO] 10.244.0.4:54725 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150049s
	[INFO] 10.244.1.2:46191 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000115875s
	[INFO] 10.244.1.2:54023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001742745s
	[INFO] 10.244.1.2:51538 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140285s
	[INFO] 10.244.1.2:56008 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088578s
	[INFO] 10.244.2.2:44895 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000095319s
	[INFO] 10.244.2.2:40784 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000167082s
	[INFO] 10.244.0.4:48376 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120067s
	[INFO] 10.244.0.4:39840 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111609s
	[INFO] 10.244.0.4:38416 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058031s
	[INFO] 10.244.1.2:42578 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176608s
	[INFO] 10.244.2.2:48597 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000139446s
	[INFO] 10.244.2.2:51477 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106731s
	[INFO] 10.244.0.4:47399 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000109762s
	[INFO] 10.244.0.4:48496 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000126806s
	[INFO] 10.244.1.2:33090 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183559s
	[INFO] 10.244.1.2:58207 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000095513s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1898&timeout=6m37s&timeoutSeconds=397&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1871&timeout=6m4s&timeoutSeconds=364&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [7bed7bb7928108a5327aa6860c158b0b240804f8384d97301d1c79dbae5fd12c] <==
	[INFO] 10.244.2.2:40109 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000117861s
	[INFO] 10.244.0.4:43889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00020394s
	[INFO] 10.244.0.4:34685 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000072181s
	[INFO] 10.244.0.4:59825 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001335615s
	[INFO] 10.244.0.4:51461 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176686s
	[INFO] 10.244.0.4:35140 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000051586s
	[INFO] 10.244.1.2:54871 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115274s
	[INFO] 10.244.1.2:51590 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001521426s
	[INFO] 10.244.1.2:60677 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00011059s
	[INFO] 10.244.1.2:48005 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106929s
	[INFO] 10.244.2.2:58992 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000110446s
	[INFO] 10.244.2.2:41728 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108732s
	[INFO] 10.244.0.4:38164 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104442s
	[INFO] 10.244.1.2:47258 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000118558s
	[INFO] 10.244.1.2:38089 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092315s
	[INFO] 10.244.1.2:33841 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075348s
	[INFO] 10.244.2.2:33549 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00013334s
	[INFO] 10.244.2.2:53967 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000203235s
	[INFO] 10.244.0.4:37211 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128698s
	[INFO] 10.244.0.4:50842 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000112886s
	[INFO] 10.244.1.2:51560 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000281444s
	[INFO] 10.244.1.2:48121 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000072064s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1931&timeout=5m55s&timeoutSeconds=355&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [8042b04ce3ea942c684bd08b20f0c8e3b640b7a9be711fb638462d00df1694c0] <==
	Trace[1378146420]: [10.001558209s] [10.001558209s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[2023310062]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 20:21:28.851) (total time: 10001ms):
	Trace[2023310062]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:21:38.853)
	Trace[2023310062]: [10.001954743s] [10.001954743s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: Trace[1494062392]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 20:21:31.980) (total time: 10001ms):
	Trace[1494062392]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (20:21:41.981)
	Trace[1494062392]: [10.001660843s] [10.001660843s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:52826->10.96.0.1:443: read: connection reset by peer
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:52826->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [f78626abb7e0bcf685d9788471a742f668b151da6faa3f321707ac63f8f1bbe6] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53628->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[1287796767]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 20:21:37.560) (total time: 10200ms):
	Trace[1287796767]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53628->10.96.0.1:443: read: connection reset by peer 10200ms (20:21:47.760)
	Trace[1287796767]: [10.200304192s] [10.200304192s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53628->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53616->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2007934946]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Jul-2024 20:21:37.475) (total time: 10285ms):
	Trace[2007934946]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53616->10.96.0.1:443: read: connection reset by peer 10285ms (20:21:47.761)
	Trace[2007934946]: [10.285863549s] [10.285863549s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:53616->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-344518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T20_09_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:09:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:26:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:22:08 +0000   Mon, 29 Jul 2024 20:09:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:22:08 +0000   Mon, 29 Jul 2024 20:09:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:22:08 +0000   Mon, 29 Jul 2024 20:09:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:22:08 +0000   Mon, 29 Jul 2024 20:10:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.238
	  Hostname:    ha-344518
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 58926cc84a1545f2aed136a3e761f2be
	  System UUID:                58926cc8-4a15-45f2-aed1-36a3e761f2be
	  Boot ID:                    53511801-74aa-43cb-9108-0a1fffab4f32
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-fp24v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-wzmc5             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7db6d8ff4d-xpkp6             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-344518                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-nl4kz                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-344518             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-344518    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-fh6rg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-344518             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-344518                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   Starting                 4m13s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-344518 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-344518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-344518 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-344518 status is now: NodeReady
	  Normal   RegisteredNode           14m                    node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	  Warning  ContainerGCFailed        5m35s (x2 over 6m35s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m9s                   node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	  Normal   RegisteredNode           4m1s                   node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	  Normal   RegisteredNode           3m3s                   node-controller  Node ha-344518 event: Registered Node ha-344518 in Controller
	
	
	Name:               ha-344518-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T20_11_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:11:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:26:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:22:51 +0000   Mon, 29 Jul 2024 20:22:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:22:51 +0000   Mon, 29 Jul 2024 20:22:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:22:51 +0000   Mon, 29 Jul 2024 20:22:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:22:51 +0000   Mon, 29 Jul 2024 20:22:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.104
	  Hostname:    ha-344518-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9e624f7b4f7644519a6f4690f28614c0
	  System UUID:                9e624f7b-4f76-4451-9a6f-4690f28614c0
	  Boot ID:                    4306b075-ea2d-4345-8b6c-8e5f4f92efe0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xn8rr                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-344518-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-jj2b4                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-344518-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-344518-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-nfxp2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-344518-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-344518-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m11s                  kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                    node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)      kubelet          Node ha-344518-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)      kubelet          Node ha-344518-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)      kubelet          Node ha-344518-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                    node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-344518-m02 status is now: NodeNotReady
	  Normal  Starting                 4m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m40s (x8 over 4m40s)  kubelet          Node ha-344518-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m40s (x8 over 4m40s)  kubelet          Node ha-344518-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m40s (x7 over 4m40s)  kubelet          Node ha-344518-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  RegisteredNode           4m1s                   node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	  Normal  RegisteredNode           3m3s                   node-controller  Node ha-344518-m02 event: Registered Node ha-344518-m02 in Controller
	
	
	Name:               ha-344518-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-344518-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=ha-344518
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T20_13_30_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:13:30 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-344518-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:23:54 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 20:23:34 +0000   Mon, 29 Jul 2024 20:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 20:23:34 +0000   Mon, 29 Jul 2024 20:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 20:23:34 +0000   Mon, 29 Jul 2024 20:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 20:23:34 +0000   Mon, 29 Jul 2024 20:24:36 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.70
	  Hostname:    ha-344518-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8a26135ecab4ebcafa4c947c9d6f013
	  System UUID:                d8a26135-ecab-4ebc-afa4-c947c9d6f013
	  Boot ID:                    ee682693-6ce8-4022-a093-1d884cc6af51
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-97x95    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-4m6xw              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-947zc           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   RegisteredNode           12m                    node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal   NodeHasSufficientMemory  12m (x2 over 12m)      kubelet          Node ha-344518-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x2 over 12m)      kubelet          Node ha-344518-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x2 over 12m)      kubelet          Node ha-344518-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           12m                    node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-344518-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m9s                   node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal   RegisteredNode           4m1s                   node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal   NodeNotReady             3m29s                  node-controller  Node ha-344518-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m3s                   node-controller  Node ha-344518-m04 event: Registered Node ha-344518-m04 in Controller
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-344518-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-344518-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-344518-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-344518-m04 has been rebooted, boot id: ee682693-6ce8-4022-a093-1d884cc6af51
	  Normal   NodeReady                2m48s                  kubelet          Node ha-344518-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s                   node-controller  Node ha-344518-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +9.281405] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.054666] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.050707] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.158935] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.126079] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.245623] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[  +3.820743] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +4.869843] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.068841] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.242210] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.084855] kauditd_printk_skb: 79 callbacks suppressed
	[Jul29 20:10] kauditd_printk_skb: 21 callbacks suppressed
	[ +15.358609] kauditd_printk_skb: 38 callbacks suppressed
	[Jul29 20:11] kauditd_printk_skb: 26 callbacks suppressed
	[Jul29 20:21] systemd-fstab-generator[3672]: Ignoring "noauto" option for root device
	[  +0.144300] systemd-fstab-generator[3684]: Ignoring "noauto" option for root device
	[  +0.182848] systemd-fstab-generator[3698]: Ignoring "noauto" option for root device
	[  +0.146719] systemd-fstab-generator[3710]: Ignoring "noauto" option for root device
	[  +0.269424] systemd-fstab-generator[3738]: Ignoring "noauto" option for root device
	[  +5.858543] systemd-fstab-generator[3841]: Ignoring "noauto" option for root device
	[  +0.084685] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.631358] kauditd_printk_skb: 22 callbacks suppressed
	[ +12.253877] kauditd_printk_skb: 75 callbacks suppressed
	[ +10.069821] kauditd_printk_skb: 1 callbacks suppressed
	[Jul29 20:22] kauditd_printk_skb: 10 callbacks suppressed
	
	
	==> etcd [973ffc8ba50420879151776d566b23f1cb59e6893c2cc15be0144e5c2d193a7c] <==
	{"level":"info","ts":"2024-07-29T20:23:02.648668Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:23:02.648596Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"fff3906243738b90","to":"57cb2df333d7b24","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-29T20:23:02.648866Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"warn","ts":"2024-07-29T20:23:02.665844Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.53:33634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-29T20:23:06.109759Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"57cb2df333d7b24","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-07-29T20:23:06.110693Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"57cb2df333d7b24","rtt":"0s","error":"dial tcp 192.168.39.53:2380: connect: connection refused"}
	{"level":"info","ts":"2024-07-29T20:23:12.709689Z","caller":"traceutil/trace.go:171","msg":"trace[1507592919] transaction","detail":"{read_only:false; response_revision:2425; number_of_response:1; }","duration":"125.432558ms","start":"2024-07-29T20:23:12.584242Z","end":"2024-07-29T20:23:12.709674Z","steps":["trace[1507592919] 'process raft request'  (duration: 125.334901ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T20:23:37.516579Z","caller":"traceutil/trace.go:171","msg":"trace[364444095] transaction","detail":"{read_only:false; response_revision:2518; number_of_response:1; }","duration":"104.731185ms","start":"2024-07-29T20:23:37.411808Z","end":"2024-07-29T20:23:37.516539Z","steps":["trace[364444095] 'process raft request'  (duration: 104.594979ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T20:23:41.293401Z","caller":"traceutil/trace.go:171","msg":"trace[1439658240] transaction","detail":"{read_only:false; response_revision:2532; number_of_response:1; }","duration":"123.527902ms","start":"2024-07-29T20:23:41.169846Z","end":"2024-07-29T20:23:41.293374Z","steps":["trace[1439658240] 'process raft request'  (duration: 122.881491ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T20:23:48.058069Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fff3906243738b90 switched to configuration voters=(2894239873573755453 18443243650725153680)"}
	{"level":"info","ts":"2024-07-29T20:23:48.060067Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"3658928c14b8a733","local-member-id":"fff3906243738b90","removed-remote-peer-id":"57cb2df333d7b24","removed-remote-peer-urls":["https://192.168.39.53:2380"]}
	{"level":"info","ts":"2024-07-29T20:23:48.060133Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"57cb2df333d7b24"}
	{"level":"warn","ts":"2024-07-29T20:23:48.061161Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:23:48.061249Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"57cb2df333d7b24"}
	{"level":"warn","ts":"2024-07-29T20:23:48.061597Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:23:48.061675Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:23:48.06184Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"warn","ts":"2024-07-29T20:23:48.062061Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24","error":"context canceled"}
	{"level":"warn","ts":"2024-07-29T20:23:48.062111Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"57cb2df333d7b24","error":"failed to read 57cb2df333d7b24 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-07-29T20:23:48.062142Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"warn","ts":"2024-07-29T20:23:48.062529Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T20:23:48.062574Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:23:48.062601Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:23:48.062615Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"fff3906243738b90","removed-remote-peer-id":"57cb2df333d7b24"}
	{"level":"warn","ts":"2024-07-29T20:23:48.089949Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"fff3906243738b90","remote-peer-id-stream-handler":"fff3906243738b90","remote-peer-id-from":"57cb2df333d7b24"}
	
	
	==> etcd [a0e14d313861e66764c32120d2d8a5ba54d2a4f39ac69cc878f7bce1c6a5ea50] <==
	{"level":"warn","ts":"2024-07-29T20:19:40.299427Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"7.779446342s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" limit:500 ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-29T20:19:40.328477Z","caller":"traceutil/trace.go:171","msg":"trace[1283689381] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; }","duration":"7.808532729s","start":"2024-07-29T20:19:32.519937Z","end":"2024-07-29T20:19:40.32847Z","steps":["trace[1283689381] 'agreement among raft nodes before linearized reading'  (duration: 7.779453255s)"],"step_count":1}
	2024/07/29 20:19:40 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-29T20:19:40.323908Z","caller":"traceutil/trace.go:171","msg":"trace[2141952898] range","detail":"{range_begin:/registry/leases/kube-system/apiserver-bw7ug3v3srytgg3i4du4ulxmvu; range_end:; }","duration":"7.813960958s","start":"2024-07-29T20:19:32.509127Z","end":"2024-07-29T20:19:40.323088Z","steps":["trace[2141952898] 'agreement among raft nodes before linearized reading'  (duration: 7.791389958s)"],"step_count":1}
	2024/07/29 20:19:40 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-29T20:19:40.323186Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T20:19:31.448835Z","time spent":"8.874339609s","remote":"127.0.0.1:42658","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":0,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true "}
	2024/07/29 20:19:40 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2024-07-29T20:19:40.365497Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"fff3906243738b90","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-29T20:19:40.365683Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"282a67d4a7229a3d"}
	{"level":"info","ts":"2024-07-29T20:19:40.365738Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"282a67d4a7229a3d"}
	{"level":"info","ts":"2024-07-29T20:19:40.365795Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"282a67d4a7229a3d"}
	{"level":"info","ts":"2024-07-29T20:19:40.365887Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d"}
	{"level":"info","ts":"2024-07-29T20:19:40.36597Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d"}
	{"level":"info","ts":"2024-07-29T20:19:40.366057Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"282a67d4a7229a3d"}
	{"level":"info","ts":"2024-07-29T20:19:40.366098Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"282a67d4a7229a3d"}
	{"level":"info","ts":"2024-07-29T20:19:40.366108Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:19:40.366117Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:19:40.366154Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:19:40.366245Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:19:40.366293Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:19:40.366345Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"fff3906243738b90","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:19:40.366378Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"57cb2df333d7b24"}
	{"level":"info","ts":"2024-07-29T20:19:40.369898Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2024-07-29T20:19:40.370094Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.238:2380"}
	{"level":"info","ts":"2024-07-29T20:19:40.370169Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-344518","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.238:2380"],"advertise-client-urls":["https://192.168.39.238:2379"]}
	
	
	==> kernel <==
	 20:26:22 up 17 min,  0 users,  load average: 0.16, 0.40, 0.28
	Linux ha-344518 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [594577e4d332fe8c4821d9d7c841faa9b7d43a330747915b5616e6dbb579600f] <==
	I0729 20:19:16.996547       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:19:16.996553       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:19:16.996678       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:19:16.996730       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:19:16.996807       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:19:16.996826       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	I0729 20:19:27.005161       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:19:27.005272       1 main.go:299] handling current node
	I0729 20:19:27.005297       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:19:27.005303       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:19:27.005432       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:19:27.005484       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:19:27.005565       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:19:27.005584       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	E0729 20:19:34.014870       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=1876&timeout=7m9s&timeoutSeconds=429&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=7, ErrCode=NO_ERROR, debug=""
	I0729 20:19:36.996555       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:19:36.996595       1 main.go:299] handling current node
	I0729 20:19:36.996633       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:19:36.996643       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:19:36.996799       1 main.go:295] Handling node with IPs: map[192.168.39.53:{}]
	I0729 20:19:36.996805       1 main.go:322] Node ha-344518-m03 has CIDR [10.244.2.0/24] 
	I0729 20:19:36.996874       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:19:36.996880       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	W0729 20:19:37.086724       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1876": dial tcp 10.96.0.1:443: connect: no route to host
	E0729 20:19:37.087179       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?resourceVersion=1876": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> kindnet [80e938336fd3e7fbc1694fd82610e1664e54063f54a808bdec44e98b5ddfe3ec] <==
	I0729 20:25:36.625009       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	I0729 20:25:46.617368       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:25:46.617638       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	I0729 20:25:46.617863       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:25:46.617893       1 main.go:299] handling current node
	I0729 20:25:46.617941       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:25:46.617958       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:25:56.617262       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:25:56.617417       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	I0729 20:25:56.617587       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:25:56.617611       1 main.go:299] handling current node
	I0729 20:25:56.617639       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:25:56.617657       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:26:06.625349       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:26:06.625432       1 main.go:299] handling current node
	I0729 20:26:06.625455       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:26:06.625460       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:26:06.625601       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:26:06.625620       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	I0729 20:26:16.625431       1 main.go:295] Handling node with IPs: map[192.168.39.238:{}]
	I0729 20:26:16.625584       1 main.go:299] handling current node
	I0729 20:26:16.625620       1 main.go:295] Handling node with IPs: map[192.168.39.104:{}]
	I0729 20:26:16.625644       1 main.go:322] Node ha-344518-m02 has CIDR [10.244.1.0/24] 
	I0729 20:26:16.625864       1 main.go:295] Handling node with IPs: map[192.168.39.70:{}]
	I0729 20:26:16.625897       1 main.go:322] Node ha-344518-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [5882d9060c0d61bb94eab44893a0cb387e9034ecac4f2a1228531ab368fc8746] <==
	I0729 20:21:26.016306       1 options.go:221] external host was not specified, using 192.168.39.238
	I0729 20:21:26.017370       1 server.go:148] Version: v1.30.3
	I0729 20:21:26.017464       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:21:26.725787       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0729 20:21:26.743248       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 20:21:26.749867       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0729 20:21:26.749901       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0729 20:21:26.750076       1 instance.go:299] Using reconciler: lease
	W0729 20:21:46.725048       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0729 20:21:46.726523       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0729 20:21:46.753626       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [898a9f8b1999b5ef70ef31e3d508518770493bfcf282db7bf1a3f279f73aa889] <==
	I0729 20:22:05.615133       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0729 20:22:05.714879       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 20:22:05.719625       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 20:22:05.726059       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 20:22:05.715982       1 shared_informer.go:320] Caches are synced for configmaps
	W0729 20:22:05.737567       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.53]
	I0729 20:22:05.737689       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 20:22:05.737758       1 aggregator.go:165] initial CRD sync complete...
	I0729 20:22:05.737769       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 20:22:05.737774       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 20:22:05.737778       1 cache.go:39] Caches are synced for autoregister controller
	I0729 20:22:05.737887       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 20:22:05.772637       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 20:22:05.779026       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 20:22:05.779140       1 policy_source.go:224] refreshing policies
	I0729 20:22:05.815704       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 20:22:05.815805       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 20:22:05.815847       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 20:22:05.841060       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 20:22:05.849150       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0729 20:22:05.855373       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0729 20:22:06.621994       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 20:22:06.971314       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.238 192.168.39.53]
	W0729 20:22:16.971771       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.238]
	W0729 20:24:06.977822       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.104 192.168.39.238]
	
	
	==> kube-controller-manager [6269dfd02a3c7cfdd496f304797388313ebc111d929c02148f9a281a4f6ef890] <==
	E0729 20:24:21.010031       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344518-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344518-m03"
	E0729 20:24:21.010042       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344518-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344518-m03"
	E0729 20:24:21.010049       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344518-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344518-m03"
	E0729 20:24:21.010056       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344518-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344518-m03"
	I0729 20:24:36.163724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.867729ms"
	I0729 20:24:36.163836       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.31µs"
	E0729 20:24:41.010422       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344518-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344518-m03"
	E0729 20:24:41.010462       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344518-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344518-m03"
	E0729 20:24:41.010469       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344518-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344518-m03"
	E0729 20:24:41.010474       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344518-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344518-m03"
	E0729 20:24:41.010479       1 gc_controller.go:153] "Failed to get node" err="node \"ha-344518-m03\" not found" logger="pod-garbage-collector-controller" node="ha-344518-m03"
	I0729 20:24:41.021995       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-344518-m03"
	I0729 20:24:41.047752       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-344518-m03"
	I0729 20:24:41.047793       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-344518-m03"
	I0729 20:24:41.081042       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-344518-m03"
	I0729 20:24:41.081091       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-6qbz5"
	I0729 20:24:41.110846       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-6qbz5"
	I0729 20:24:41.110891       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-344518-m03"
	I0729 20:24:41.142006       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-344518-m03"
	I0729 20:24:41.142533       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-344518-m03"
	I0729 20:24:41.178140       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-344518-m03"
	I0729 20:24:41.178278       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-s8wn5"
	I0729 20:24:41.202094       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-s8wn5"
	I0729 20:24:41.202175       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-344518-m03"
	I0729 20:24:41.225971       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-344518-m03"
	
	
	==> kube-controller-manager [cab86b8020816100d094c1b48821b0f1df477e1f9e093030148b4ce3ffbe90d8] <==
	I0729 20:21:26.824746       1 serving.go:380] Generated self-signed cert in-memory
	I0729 20:21:27.272846       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0729 20:21:27.272885       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:21:27.274473       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 20:21:27.274592       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 20:21:27.274699       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 20:21:27.274823       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0729 20:21:47.760018       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.238:8443/healthz\": dial tcp 192.168.39.238:8443: connect: connection refused"
	
	
	==> kube-proxy [5bad9db5d08667b5f9b315895e9ad4805f194b40b7553a1d700418f8916ff52c] <==
	I0729 20:21:26.954833       1 server_linux.go:69] "Using iptables proxy"
	E0729 20:21:28.318723       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344518\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 20:21:31.391348       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344518\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 20:21:34.463817       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344518\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 20:21:40.608449       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344518\": dial tcp 192.168.39.254:8443: connect: no route to host"
	E0729 20:21:49.822754       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-344518\": dial tcp 192.168.39.254:8443: connect: no route to host"
	I0729 20:22:08.041705       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.238"]
	I0729 20:22:08.075776       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 20:22:08.075880       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 20:22:08.075921       1 server_linux.go:165] "Using iptables Proxier"
	I0729 20:22:08.078299       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 20:22:08.078648       1 server.go:872] "Version info" version="v1.30.3"
	I0729 20:22:08.078955       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:22:08.081633       1 config.go:192] "Starting service config controller"
	I0729 20:22:08.081690       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 20:22:08.081740       1 config.go:101] "Starting endpoint slice config controller"
	I0729 20:22:08.081760       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 20:22:08.083616       1 config.go:319] "Starting node config controller"
	I0729 20:22:08.083637       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 20:22:08.182163       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 20:22:08.182168       1 shared_informer.go:320] Caches are synced for service config
	I0729 20:22:08.183758       1 shared_informer.go:320] Caches are synced for node config
	W0729 20:24:51.994182       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0729 20:24:51.994184       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0729 20:24:51.994299       1 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-proxy [d79e4f49251f63cd06122266c3ce0baf54ffec1faf53b1f354bb9e0f94ec5454] <==
	E0729 20:18:30.143655       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:33.215944       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:33.216090       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:33.216286       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:33.216366       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:33.215829       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:33.216453       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:39.359899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:39.359997       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:39.360516       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:39.360595       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:39.360634       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:39.360690       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:48.575761       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:48.575888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:48.575998       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:48.575915       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:18:48.576117       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:18:48.576233       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:19:07.007147       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:19:07.007282       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-344518&resourceVersion=1931": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:19:07.007420       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:19:07.007475       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1871": dial tcp 192.168.39.254:8443: connect: no route to host
	W0729 20:19:13.151371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
	E0729 20:19:13.151692       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1845": dial tcp 192.168.39.254:8443: connect: no route to host
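The %!s(MISSING), %!F(MISSING), %!C(MISSING) and %!D(MISSING) fragments in the watch errors above are not part of the request URLs; they are Go fmt artifacts. The label selector !service.kubernetes.io/headless,!service.kubernetes.io/service-proxy-name is percent-encoded in the URL (%21, %2F, %2C, and %3D for the field selector's '='), and the error string appears to have been passed through a printf-style call with no arguments somewhere on the logging path, so each %NN<letter> sequence is parsed as a width plus a verb with a missing argument. A minimal reproduction in Go (illustrative only; this is not minikube or kube-proxy code):

package main

import "fmt"

func main() {
	// A percent-encoded URL (mis)used as a format string with no arguments:
	//   %21s -> width 21, verb 's', no argument -> %!s(MISSING)
	//   %2F  -> width 2,  verb 'F', no argument -> %!F(MISSING)
	url := "/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless&resourceVersion=1871"
	out := fmt.Sprintf(url) // go vet flags this non-constant format string, which is exactly the bug
	fmt.Println(out)        // .../services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless&resourceVersion=1871
}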
	
	
	==> kube-scheduler [1121b90510c219ddec9dfe67f669354e0cf03f3266548d329b2358d3988bb0be] <==
	E0729 20:19:36.004049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 20:19:36.005041       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 20:19:36.005064       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0729 20:19:36.168958       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 20:19:36.169057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 20:19:36.271753       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 20:19:36.271856       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 20:19:36.347442       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 20:19:36.347488       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 20:19:36.571475       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 20:19:36.571571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0729 20:19:36.625585       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 20:19:36.625629       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 20:19:36.932558       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 20:19:36.932594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0729 20:19:39.082794       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 20:19:39.082845       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 20:19:39.467510       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 20:19:39.467556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0729 20:19:39.901570       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 20:19:39.901682       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 20:19:40.216643       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 20:19:40.216680       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0729 20:19:40.269585       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0729 20:19:40.269766       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c89a9f7056c1b235c762c9454796c713900a9d4ec9575b84e0e54a9dfbf600e3] <==
	W0729 20:21:57.973291       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.39.238:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0729 20:21:57.973341       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.238:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0729 20:21:58.359729       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.39.238:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0729 20:21:58.359787       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.238:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0729 20:22:01.001829       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	E0729 20:22:01.002015       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.238:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.238:8443: connect: connection refused
	W0729 20:22:05.667403       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 20:22:05.667455       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0729 20:22:05.680616       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 20:22:05.680668       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 20:22:05.680734       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 20:22:05.680766       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 20:22:05.680824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 20:22:05.680852       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 20:22:05.680640       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 20:22:05.680979       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 20:22:05.704964       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 20:22:05.705105       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 20:22:05.708608       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 20:22:05.708729       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0729 20:22:05.715613       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 20:22:05.715743       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 20:22:05.716006       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 20:22:05.716047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0729 20:22:22.969722       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
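The run of forbidden: User "system:kube-scheduler" cannot list ... messages above is typical of a scheduler reconnecting to an API server that is still coming back up: the requests authenticate but are rejected until the server's RBAC/authorization machinery is ready, and in this excerpt they stop before the final "Caches are synced" line. If such errors persisted, one out-of-band way to check the binding would be a SubjectAccessReview, roughly what kubectl auth can-i list nodes --as=system:kube-scheduler performs. A minimal client-go sketch, assuming a reachable cluster and a hypothetical kubeconfig path (this is not part of the test suite):

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Ask the API server whether system:kube-scheduler may list nodes cluster-wide.
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: "nodes",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}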
	
	
	==> kubelet <==
	Jul 29 20:22:43 ha-344518 kubelet[1384]: E0729 20:22:43.652454    1384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9e8bd9d2-8adf-47de-8e32-05d64002a631)\"" pod="kube-system/storage-provisioner" podUID="9e8bd9d2-8adf-47de-8e32-05d64002a631"
	Jul 29 20:22:47 ha-344518 kubelet[1384]: E0729 20:22:47.711917    1384 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:22:47 ha-344518 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:22:47 ha-344518 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:22:47 ha-344518 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:22:47 ha-344518 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:22:54 ha-344518 kubelet[1384]: I0729 20:22:54.652101    1384 kubelet.go:1917] "Trying to delete pod" pod="kube-system/kube-vip-ha-344518" podUID="140d2a2f-c461-421e-9b01-a5e6d7f2b9f8"
	Jul 29 20:22:54 ha-344518 kubelet[1384]: I0729 20:22:54.669675    1384 kubelet.go:1922] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-344518"
	Jul 29 20:22:55 ha-344518 kubelet[1384]: I0729 20:22:55.652376    1384 scope.go:117] "RemoveContainer" containerID="c18f890b02c28d05903c6394651080defb961397049bd490d97c8f2e0a2f49f1"
	Jul 29 20:22:56 ha-344518 kubelet[1384]: I0729 20:22:56.487611    1384 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-344518" podStartSLOduration=2.487579776 podStartE2EDuration="2.487579776s" podCreationTimestamp="2024-07-29 20:22:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-29 20:22:56.472300686 +0000 UTC m=+788.978261571" watchObservedRunningTime="2024-07-29 20:22:56.487579776 +0000 UTC m=+788.993540660"
	Jul 29 20:23:47 ha-344518 kubelet[1384]: E0729 20:23:47.712359    1384 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:23:47 ha-344518 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:23:47 ha-344518 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:23:47 ha-344518 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:23:47 ha-344518 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:24:47 ha-344518 kubelet[1384]: E0729 20:24:47.710291    1384 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:24:47 ha-344518 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:24:47 ha-344518 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:24:47 ha-344518 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:24:47 ha-344518 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:25:47 ha-344518 kubelet[1384]: E0729 20:25:47.709302    1384 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:25:47 ha-344518 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:25:47 ha-344518 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:25:47 ha-344518 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:25:47 ha-344518 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 20:26:21.220531  764891 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19344-733808/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
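The "failed to output last start logs ... bufio.Scanner: token too long" message in stderr is a Go standard-library limit rather than a corrupted file: bufio.Scanner rejects any single token (here, one line of lastStart.txt) larger than its buffer, which defaults to bufio.MaxScanTokenSize (64 KiB), and the scan aborts with bufio.ErrTooLong. An illustrative sketch of reading such a file with a larger per-line cap (this is not minikube's actual log reader; the path and limit are placeholders):

package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLongLines scans a file line by line, allowing lines up to 1 MiB
// instead of bufio.Scanner's default 64 KiB token limit.
func readLongLines(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // initial buffer, max token size
	for sc.Scan() {
		_ = sc.Text() // process the line
	}
	return sc.Err() // still bufio.ErrTooLong if a line exceeds the raised cap
}

func main() {
	if err := readLongLines("lastStart.txt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}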
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-344518 -n ha-344518
helpers_test.go:261: (dbg) Run:  kubectl --context ha-344518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.61s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (327.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-151054
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-151054
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-151054: exit status 82 (2m1.713601784s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-151054-m03"  ...
	* Stopping node "multinode-151054-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-151054" : exit status 82
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-151054 --wait=true -v=8 --alsologtostderr
E0729 20:43:14.091793  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:46:17.138383  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-151054 --wait=true -v=8 --alsologtostderr: (3m23.341754676s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-151054
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-151054 -n multinode-151054
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-151054 logs -n 25: (1.428495409s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-151054 cp multinode-151054-m02:/home/docker/cp-test.txt                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2361961589/001/cp-test_multinode-151054-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-151054 cp multinode-151054-m02:/home/docker/cp-test.txt                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054:/home/docker/cp-test_multinode-151054-m02_multinode-151054.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n multinode-151054 sudo cat                                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | /home/docker/cp-test_multinode-151054-m02_multinode-151054.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-151054 cp multinode-151054-m02:/home/docker/cp-test.txt                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m03:/home/docker/cp-test_multinode-151054-m02_multinode-151054-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n multinode-151054-m03 sudo cat                                   | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | /home/docker/cp-test_multinode-151054-m02_multinode-151054-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-151054 cp testdata/cp-test.txt                                                | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-151054 cp multinode-151054-m03:/home/docker/cp-test.txt                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2361961589/001/cp-test_multinode-151054-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-151054 cp multinode-151054-m03:/home/docker/cp-test.txt                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054:/home/docker/cp-test_multinode-151054-m03_multinode-151054.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n multinode-151054 sudo cat                                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | /home/docker/cp-test_multinode-151054-m03_multinode-151054.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-151054 cp multinode-151054-m03:/home/docker/cp-test.txt                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m02:/home/docker/cp-test_multinode-151054-m03_multinode-151054-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n multinode-151054-m02 sudo cat                                   | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | /home/docker/cp-test_multinode-151054-m03_multinode-151054-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-151054 node stop m03                                                          | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	| node    | multinode-151054 node start                                                             | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:41 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-151054                                                                | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:41 UTC |                     |
	| stop    | -p multinode-151054                                                                     | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:41 UTC |                     |
	| start   | -p multinode-151054                                                                     | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:43 UTC | 29 Jul 24 20:46 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-151054                                                                | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:46 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 20:43:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 20:43:02.610225  774167 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:43:02.610356  774167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:43:02.610364  774167 out.go:304] Setting ErrFile to fd 2...
	I0729 20:43:02.610369  774167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:43:02.610564  774167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:43:02.611117  774167 out.go:298] Setting JSON to false
	I0729 20:43:02.612200  774167 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":15930,"bootTime":1722269853,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 20:43:02.612261  774167 start.go:139] virtualization: kvm guest
	I0729 20:43:02.619537  774167 out.go:177] * [multinode-151054] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 20:43:02.623637  774167 notify.go:220] Checking for updates...
	I0729 20:43:02.623658  774167 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 20:43:02.625559  774167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 20:43:02.627166  774167 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:43:02.628706  774167 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:43:02.630067  774167 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 20:43:02.631355  774167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 20:43:02.633182  774167 config.go:182] Loaded profile config "multinode-151054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:43:02.633305  774167 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 20:43:02.633921  774167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:43:02.633971  774167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:43:02.650780  774167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37449
	I0729 20:43:02.651289  774167 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:43:02.651882  774167 main.go:141] libmachine: Using API Version  1
	I0729 20:43:02.651905  774167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:43:02.652266  774167 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:43:02.652601  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:43:02.687511  774167 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 20:43:02.688747  774167 start.go:297] selected driver: kvm2
	I0729 20:43:02.688761  774167 start.go:901] validating driver "kvm2" against &{Name:multinode-151054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:multinode-151054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMi
rror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:43:02.688900  774167 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 20:43:02.689212  774167 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:43:02.689283  774167 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 20:43:02.704645  774167 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 20:43:02.705387  774167 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 20:43:02.705447  774167 cni.go:84] Creating CNI manager for ""
	I0729 20:43:02.705459  774167 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 20:43:02.705536  774167 start.go:340] cluster config:
	{Name:multinode-151054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-151054 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false k
ong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:43:02.705680  774167 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:43:02.707371  774167 out.go:177] * Starting "multinode-151054" primary control-plane node in "multinode-151054" cluster
	I0729 20:43:02.708457  774167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:43:02.708504  774167 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 20:43:02.708520  774167 cache.go:56] Caching tarball of preloaded images
	I0729 20:43:02.708638  774167 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 20:43:02.708649  774167 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 20:43:02.708764  774167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/config.json ...
	I0729 20:43:02.708951  774167 start.go:360] acquireMachinesLock for multinode-151054: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:43:02.708993  774167 start.go:364] duration metric: took 24.488µs to acquireMachinesLock for "multinode-151054"
	I0729 20:43:02.709007  774167 start.go:96] Skipping create...Using existing machine configuration
	I0729 20:43:02.709018  774167 fix.go:54] fixHost starting: 
	I0729 20:43:02.709280  774167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:43:02.709314  774167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:43:02.723593  774167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34883
	I0729 20:43:02.724077  774167 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:43:02.724538  774167 main.go:141] libmachine: Using API Version  1
	I0729 20:43:02.724561  774167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:43:02.724916  774167 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:43:02.725074  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:43:02.725205  774167 main.go:141] libmachine: (multinode-151054) Calling .GetState
	I0729 20:43:02.726862  774167 fix.go:112] recreateIfNeeded on multinode-151054: state=Running err=<nil>
	W0729 20:43:02.726885  774167 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 20:43:02.729475  774167 out.go:177] * Updating the running kvm2 "multinode-151054" VM ...
	I0729 20:43:02.730898  774167 machine.go:94] provisionDockerMachine start ...
	I0729 20:43:02.730925  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:43:02.731154  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:43:02.733874  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:02.734442  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:43:02.734474  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:02.734643  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:43:02.734836  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:02.735035  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:02.735203  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:43:02.735399  774167 main.go:141] libmachine: Using SSH client type: native
	I0729 20:43:02.735610  774167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0729 20:43:02.735632  774167 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 20:43:02.840723  774167 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-151054
	
	I0729 20:43:02.840753  774167 main.go:141] libmachine: (multinode-151054) Calling .GetMachineName
	I0729 20:43:02.841014  774167 buildroot.go:166] provisioning hostname "multinode-151054"
	I0729 20:43:02.841048  774167 main.go:141] libmachine: (multinode-151054) Calling .GetMachineName
	I0729 20:43:02.841262  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:43:02.844377  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:02.844853  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:43:02.844876  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:02.844996  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:43:02.845192  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:02.845352  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:02.845497  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:43:02.845713  774167 main.go:141] libmachine: Using SSH client type: native
	I0729 20:43:02.845930  774167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0729 20:43:02.845944  774167 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-151054 && echo "multinode-151054" | sudo tee /etc/hostname
	I0729 20:43:02.962902  774167 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-151054
	
	I0729 20:43:02.962949  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:43:02.965916  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:02.966262  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:43:02.966292  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:02.966439  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:43:02.966657  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:02.966832  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:02.966971  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:43:02.967202  774167 main.go:141] libmachine: Using SSH client type: native
	I0729 20:43:02.967394  774167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0729 20:43:02.967410  774167 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-151054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-151054/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-151054' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 20:43:03.072562  774167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:43:03.072599  774167 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 20:43:03.072624  774167 buildroot.go:174] setting up certificates
	I0729 20:43:03.072636  774167 provision.go:84] configureAuth start
	I0729 20:43:03.072646  774167 main.go:141] libmachine: (multinode-151054) Calling .GetMachineName
	I0729 20:43:03.072990  774167 main.go:141] libmachine: (multinode-151054) Calling .GetIP
	I0729 20:43:03.075839  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.076290  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:43:03.076311  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.076453  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:43:03.078711  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.079005  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:43:03.079037  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.079153  774167 provision.go:143] copyHostCerts
	I0729 20:43:03.079178  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:43:03.079221  774167 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem, removing ...
	I0729 20:43:03.079230  774167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:43:03.079295  774167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 20:43:03.079387  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:43:03.079404  774167 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem, removing ...
	I0729 20:43:03.079410  774167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:43:03.079437  774167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 20:43:03.079512  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:43:03.079537  774167 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem, removing ...
	I0729 20:43:03.079551  774167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:43:03.079592  774167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 20:43:03.079664  774167 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.multinode-151054 san=[127.0.0.1 192.168.39.229 localhost minikube multinode-151054]
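
For readers unfamiliar with this provisioning step: the line above is minikube signing a new server certificate against its local CA, using that organization and SAN list. A minimal Go sketch of the same idea follows (illustrative only, not minikube's provision.go; the file names, the 2048-bit key size, and the one-year validity are assumptions):

// A minimal sketch, NOT minikube's provision.go: create a server certificate
// signed by an existing CA, with an Organization and SAN list like the ones
// logged above. File names, key size, and the one-year validity are assumptions.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA certificate and its RSA private key (both PEM-encoded).
	caCertPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caCertPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Fresh key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-151054"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "multinode-151054"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.229")},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
}
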
	I0729 20:43:03.381435  774167 provision.go:177] copyRemoteCerts
	I0729 20:43:03.381521  774167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 20:43:03.381547  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:43:03.384305  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.384740  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:43:03.384769  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.384961  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:43:03.385190  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:03.385385  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:43:03.385547  774167 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/multinode-151054/id_rsa Username:docker}
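
The "new ssh client" line above carries everything needed to reach the VM: IP, port, private-key path, and username. Below is a hedged sketch of such a client built with golang.org/x/crypto/ssh (not minikube's sshutil package); ignoring the host key and running `cat /etc/os-release` are choices made only for this example:

// Sketch: open an SSH session to the VM described in the log line above.
// The host key is ignored for brevity; real code should verify it.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPath := "/home/jenkins/minikube-integration/19344-733808/.minikube/machines/multinode-151054/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
	}
	client, err := ssh.Dial("tcp", "192.168.39.229:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("cat /etc/os-release")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
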
	I0729 20:43:03.470639  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 20:43:03.470714  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 20:43:03.498536  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 20:43:03.498628  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 20:43:03.528214  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 20:43:03.528281  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 20:43:03.553922  774167 provision.go:87] duration metric: took 481.273908ms to configureAuth
	I0729 20:43:03.553955  774167 buildroot.go:189] setting minikube options for container-runtime
	I0729 20:43:03.554229  774167 config.go:182] Loaded profile config "multinode-151054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:43:03.554324  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:43:03.557296  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.557872  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:43:03.557902  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.558132  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:43:03.558353  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:03.558606  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:03.558777  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:43:03.558947  774167 main.go:141] libmachine: Using SSH client type: native
	I0729 20:43:03.559127  774167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0729 20:43:03.559141  774167 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 20:44:34.196887  774167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 20:44:34.196929  774167 machine.go:97] duration metric: took 1m31.466012352s to provisionDockerMachine
	I0729 20:44:34.196953  774167 start.go:293] postStartSetup for "multinode-151054" (driver="kvm2")
	I0729 20:44:34.196970  774167 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 20:44:34.197004  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:44:34.197386  774167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 20:44:34.197420  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:44:34.200885  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.201467  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:44:34.201499  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.201671  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:44:34.201863  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:44:34.202027  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:44:34.202147  774167 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/multinode-151054/id_rsa Username:docker}
	I0729 20:44:34.286838  774167 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 20:44:34.290584  774167 command_runner.go:130] > NAME=Buildroot
	I0729 20:44:34.290600  774167 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 20:44:34.290604  774167 command_runner.go:130] > ID=buildroot
	I0729 20:44:34.290608  774167 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 20:44:34.290613  774167 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 20:44:34.290765  774167 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 20:44:34.290795  774167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 20:44:34.290851  774167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 20:44:34.290934  774167 filesync.go:149] local asset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> 7409622.pem in /etc/ssl/certs
	I0729 20:44:34.290947  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /etc/ssl/certs/7409622.pem
	I0729 20:44:34.291037  774167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 20:44:34.299813  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:44:34.323012  774167 start.go:296] duration metric: took 126.042002ms for postStartSetup
	I0729 20:44:34.323057  774167 fix.go:56] duration metric: took 1m31.614040115s for fixHost
	I0729 20:44:34.323089  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:44:34.326334  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.326802  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:44:34.326835  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.326984  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:44:34.327163  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:44:34.327321  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:44:34.327482  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:44:34.327660  774167 main.go:141] libmachine: Using SSH client type: native
	I0729 20:44:34.327890  774167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0729 20:44:34.327908  774167 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 20:44:34.432801  774167 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722285874.405004197
	
	I0729 20:44:34.432837  774167 fix.go:216] guest clock: 1722285874.405004197
	I0729 20:44:34.432847  774167 fix.go:229] Guest: 2024-07-29 20:44:34.405004197 +0000 UTC Remote: 2024-07-29 20:44:34.323067196 +0000 UTC m=+91.749714022 (delta=81.937001ms)
	I0729 20:44:34.432894  774167 fix.go:200] guest clock delta is within tolerance: 81.937001ms
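
The fix.go lines above compare the guest clock against the local host time and accept the ~82ms delta as within tolerance. A small sketch of that kind of check follows; the 2-second tolerance is an assumption, since the actual threshold is not shown in this log:

// Sketch of a guest/host clock-skew check like the one logged above.
// Only the ~82ms delta is taken from the log; the tolerance is assumed.
package main

import (
	"fmt"
	"time"
)

func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(82 * time.Millisecond) // delta comparable to the log above
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
}
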
	I0729 20:44:34.432903  774167 start.go:83] releasing machines lock for "multinode-151054", held for 1m31.723900503s
	I0729 20:44:34.432928  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:44:34.433187  774167 main.go:141] libmachine: (multinode-151054) Calling .GetIP
	I0729 20:44:34.435972  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.436486  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:44:34.436524  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.436710  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:44:34.437295  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:44:34.437511  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:44:34.437611  774167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 20:44:34.437711  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:44:34.437737  774167 ssh_runner.go:195] Run: cat /version.json
	I0729 20:44:34.437757  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:44:34.440262  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.440527  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.440657  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:44:34.440684  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.440846  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:44:34.440991  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:44:34.440997  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:44:34.441016  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.441190  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:44:34.441198  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:44:34.441401  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:44:34.441399  774167 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/multinode-151054/id_rsa Username:docker}
	I0729 20:44:34.441551  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:44:34.441672  774167 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/multinode-151054/id_rsa Username:docker}
	I0729 20:44:34.517028  774167 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 20:44:34.517297  774167 ssh_runner.go:195] Run: systemctl --version
	I0729 20:44:34.556555  774167 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 20:44:34.557204  774167 command_runner.go:130] > systemd 252 (252)
	I0729 20:44:34.557253  774167 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 20:44:34.557328  774167 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 20:44:34.709833  774167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 20:44:34.719820  774167 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 20:44:34.719905  774167 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 20:44:34.719971  774167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 20:44:34.729126  774167 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 20:44:34.729150  774167 start.go:495] detecting cgroup driver to use...
	I0729 20:44:34.729215  774167 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 20:44:34.744636  774167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 20:44:34.758610  774167 docker.go:216] disabling cri-docker service (if available) ...
	I0729 20:44:34.758670  774167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 20:44:34.771781  774167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 20:44:34.785262  774167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 20:44:34.934887  774167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 20:44:35.070496  774167 docker.go:232] disabling docker service ...
	I0729 20:44:35.070565  774167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 20:44:35.086493  774167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 20:44:35.099060  774167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 20:44:35.233546  774167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 20:44:35.367361  774167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 20:44:35.380659  774167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 20:44:35.397592  774167 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 20:44:35.398219  774167 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 20:44:35.398318  774167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:44:35.408682  774167 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 20:44:35.408753  774167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:44:35.419315  774167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:44:35.429424  774167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:44:35.439428  774167 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 20:44:35.449458  774167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:44:35.459285  774167 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:44:35.469564  774167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
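
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and keep conmon in the pod cgroup. A simplified Go sketch of the same kind of in-place edit is shown below; it is not minikube's crio.go, it skips the default_sysctls handling and the conmon_cgroup delete, and only the file path and literal values come from the log:

// Simplified sketch of the in-place config edits performed by the sed
// commands above. Assumes the file already contains pause_image and
// cgroup_manager lines to rewrite.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)

	// pause_image = "registry.k8s.io/pause:3.9"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// cgroup_manager = "cgroupfs", with conmon placed in the pod cgroup
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(conf), 0644); err != nil {
		panic(err)
	}
}
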
	I0729 20:44:35.479874  774167 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 20:44:35.488711  774167 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 20:44:35.488893  774167 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 20:44:35.497577  774167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:44:35.633345  774167 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 20:44:37.067497  774167 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.434111954s)
	I0729 20:44:37.067526  774167 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 20:44:37.067588  774167 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 20:44:37.072154  774167 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 20:44:37.072178  774167 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 20:44:37.072184  774167 command_runner.go:130] > Device: 0,22	Inode: 1338        Links: 1
	I0729 20:44:37.072191  774167 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 20:44:37.072199  774167 command_runner.go:130] > Access: 2024-07-29 20:44:37.008042760 +0000
	I0729 20:44:37.072207  774167 command_runner.go:130] > Modify: 2024-07-29 20:44:36.931040525 +0000
	I0729 20:44:37.072216  774167 command_runner.go:130] > Change: 2024-07-29 20:44:36.931040525 +0000
	I0729 20:44:37.072239  774167 command_runner.go:130] >  Birth: -
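
start.go announced it would wait up to 60s for /var/run/crio/crio.sock, and the stat output above shows the socket appearing almost immediately after the restart. A minimal polling sketch of that wait is below; the 500ms poll interval is an assumption:

// Sketch: poll for a socket path until it exists or the deadline passes,
// mirroring the "Will wait 60s for socket path" step above.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is ready")
}
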
	I0729 20:44:37.072261  774167 start.go:563] Will wait 60s for crictl version
	I0729 20:44:37.072319  774167 ssh_runner.go:195] Run: which crictl
	I0729 20:44:37.075966  774167 command_runner.go:130] > /usr/bin/crictl
	I0729 20:44:37.076075  774167 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 20:44:37.109888  774167 command_runner.go:130] > Version:  0.1.0
	I0729 20:44:37.109916  774167 command_runner.go:130] > RuntimeName:  cri-o
	I0729 20:44:37.109921  774167 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 20:44:37.109927  774167 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 20:44:37.110875  774167 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 20:44:37.110940  774167 ssh_runner.go:195] Run: crio --version
	I0729 20:44:37.138372  774167 command_runner.go:130] > crio version 1.29.1
	I0729 20:44:37.138398  774167 command_runner.go:130] > Version:        1.29.1
	I0729 20:44:37.138406  774167 command_runner.go:130] > GitCommit:      unknown
	I0729 20:44:37.138412  774167 command_runner.go:130] > GitCommitDate:  unknown
	I0729 20:44:37.138417  774167 command_runner.go:130] > GitTreeState:   clean
	I0729 20:44:37.138424  774167 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 20:44:37.138430  774167 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 20:44:37.138436  774167 command_runner.go:130] > Compiler:       gc
	I0729 20:44:37.138446  774167 command_runner.go:130] > Platform:       linux/amd64
	I0729 20:44:37.138452  774167 command_runner.go:130] > Linkmode:       dynamic
	I0729 20:44:37.138458  774167 command_runner.go:130] > BuildTags:      
	I0729 20:44:37.138465  774167 command_runner.go:130] >   containers_image_ostree_stub
	I0729 20:44:37.138469  774167 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 20:44:37.138475  774167 command_runner.go:130] >   btrfs_noversion
	I0729 20:44:37.138480  774167 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 20:44:37.138487  774167 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 20:44:37.138490  774167 command_runner.go:130] >   seccomp
	I0729 20:44:37.138494  774167 command_runner.go:130] > LDFlags:          unknown
	I0729 20:44:37.138500  774167 command_runner.go:130] > SeccompEnabled:   true
	I0729 20:44:37.138505  774167 command_runner.go:130] > AppArmorEnabled:  false
	I0729 20:44:37.138591  774167 ssh_runner.go:195] Run: crio --version
	I0729 20:44:37.164573  774167 command_runner.go:130] > crio version 1.29.1
	I0729 20:44:37.164596  774167 command_runner.go:130] > Version:        1.29.1
	I0729 20:44:37.164603  774167 command_runner.go:130] > GitCommit:      unknown
	I0729 20:44:37.164607  774167 command_runner.go:130] > GitCommitDate:  unknown
	I0729 20:44:37.164611  774167 command_runner.go:130] > GitTreeState:   clean
	I0729 20:44:37.164619  774167 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 20:44:37.164626  774167 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 20:44:37.164633  774167 command_runner.go:130] > Compiler:       gc
	I0729 20:44:37.164642  774167 command_runner.go:130] > Platform:       linux/amd64
	I0729 20:44:37.164648  774167 command_runner.go:130] > Linkmode:       dynamic
	I0729 20:44:37.164653  774167 command_runner.go:130] > BuildTags:      
	I0729 20:44:37.164658  774167 command_runner.go:130] >   containers_image_ostree_stub
	I0729 20:44:37.164671  774167 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 20:44:37.164678  774167 command_runner.go:130] >   btrfs_noversion
	I0729 20:44:37.164682  774167 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 20:44:37.164689  774167 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 20:44:37.164693  774167 command_runner.go:130] >   seccomp
	I0729 20:44:37.164699  774167 command_runner.go:130] > LDFlags:          unknown
	I0729 20:44:37.164704  774167 command_runner.go:130] > SeccompEnabled:   true
	I0729 20:44:37.164714  774167 command_runner.go:130] > AppArmorEnabled:  false
	I0729 20:44:37.167817  774167 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 20:44:37.169152  774167 main.go:141] libmachine: (multinode-151054) Calling .GetIP
	I0729 20:44:37.171904  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:37.172282  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:44:37.172307  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:37.172486  774167 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 20:44:37.176504  774167 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 20:44:37.176692  774167 kubeadm.go:883] updating cluster {Name:multinode-151054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-151054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 20:44:37.176827  774167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:44:37.176886  774167 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:44:37.226364  774167 command_runner.go:130] > {
	I0729 20:44:37.226391  774167 command_runner.go:130] >   "images": [
	I0729 20:44:37.226395  774167 command_runner.go:130] >     {
	I0729 20:44:37.226404  774167 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 20:44:37.226409  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.226416  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 20:44:37.226422  774167 command_runner.go:130] >       ],
	I0729 20:44:37.226426  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.226439  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 20:44:37.226451  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 20:44:37.226457  774167 command_runner.go:130] >       ],
	I0729 20:44:37.226468  774167 command_runner.go:130] >       "size": "87165492",
	I0729 20:44:37.226475  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.226482  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.226495  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.226499  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.226503  774167 command_runner.go:130] >     },
	I0729 20:44:37.226507  774167 command_runner.go:130] >     {
	I0729 20:44:37.226513  774167 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 20:44:37.226522  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.226532  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 20:44:37.226541  774167 command_runner.go:130] >       ],
	I0729 20:44:37.226548  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.226562  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 20:44:37.226581  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 20:44:37.226589  774167 command_runner.go:130] >       ],
	I0729 20:44:37.226593  774167 command_runner.go:130] >       "size": "87174707",
	I0729 20:44:37.226600  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.226614  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.226626  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.226635  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.226641  774167 command_runner.go:130] >     },
	I0729 20:44:37.226649  774167 command_runner.go:130] >     {
	I0729 20:44:37.226659  774167 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 20:44:37.226669  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.226678  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 20:44:37.226684  774167 command_runner.go:130] >       ],
	I0729 20:44:37.226691  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.226706  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 20:44:37.226720  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 20:44:37.226729  774167 command_runner.go:130] >       ],
	I0729 20:44:37.226736  774167 command_runner.go:130] >       "size": "1363676",
	I0729 20:44:37.226744  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.226754  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.226761  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.226769  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.226775  774167 command_runner.go:130] >     },
	I0729 20:44:37.226783  774167 command_runner.go:130] >     {
	I0729 20:44:37.226796  774167 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 20:44:37.226805  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.226815  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 20:44:37.226824  774167 command_runner.go:130] >       ],
	I0729 20:44:37.226833  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.226945  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 20:44:37.226989  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 20:44:37.227000  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227081  774167 command_runner.go:130] >       "size": "31470524",
	I0729 20:44:37.227105  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.227118  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.227127  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.227136  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.227145  774167 command_runner.go:130] >     },
	I0729 20:44:37.227153  774167 command_runner.go:130] >     {
	I0729 20:44:37.227164  774167 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 20:44:37.227173  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.227187  774167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 20:44:37.227197  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227204  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.227218  774167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 20:44:37.227233  774167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 20:44:37.227241  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227249  774167 command_runner.go:130] >       "size": "61245718",
	I0729 20:44:37.227254  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.227260  774167 command_runner.go:130] >       "username": "nonroot",
	I0729 20:44:37.227269  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.227279  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.227285  774167 command_runner.go:130] >     },
	I0729 20:44:37.227294  774167 command_runner.go:130] >     {
	I0729 20:44:37.227326  774167 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 20:44:37.227334  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.227342  774167 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 20:44:37.227349  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227363  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.227378  774167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 20:44:37.227408  774167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 20:44:37.227416  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227425  774167 command_runner.go:130] >       "size": "150779692",
	I0729 20:44:37.227430  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.227440  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.227449  774167 command_runner.go:130] >       },
	I0729 20:44:37.227458  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.227465  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.227474  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.227483  774167 command_runner.go:130] >     },
	I0729 20:44:37.227491  774167 command_runner.go:130] >     {
	I0729 20:44:37.227503  774167 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 20:44:37.227511  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.227518  774167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 20:44:37.227526  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227536  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.227552  774167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 20:44:37.227564  774167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 20:44:37.227569  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227575  774167 command_runner.go:130] >       "size": "117609954",
	I0729 20:44:37.227581  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.227587  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.227593  774167 command_runner.go:130] >       },
	I0729 20:44:37.227601  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.227607  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.227614  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.227623  774167 command_runner.go:130] >     },
	I0729 20:44:37.227630  774167 command_runner.go:130] >     {
	I0729 20:44:37.227643  774167 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 20:44:37.227650  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.227662  774167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 20:44:37.227671  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227679  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.227706  774167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 20:44:37.227725  774167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 20:44:37.227733  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227742  774167 command_runner.go:130] >       "size": "112198984",
	I0729 20:44:37.227750  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.227756  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.227761  774167 command_runner.go:130] >       },
	I0729 20:44:37.227766  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.227771  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.227776  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.227782  774167 command_runner.go:130] >     },
	I0729 20:44:37.227788  774167 command_runner.go:130] >     {
	I0729 20:44:37.227798  774167 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 20:44:37.227805  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.227815  774167 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 20:44:37.227821  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227828  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.227840  774167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 20:44:37.227852  774167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 20:44:37.227859  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227867  774167 command_runner.go:130] >       "size": "85953945",
	I0729 20:44:37.227874  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.227880  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.227888  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.227898  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.227904  774167 command_runner.go:130] >     },
	I0729 20:44:37.227913  774167 command_runner.go:130] >     {
	I0729 20:44:37.227925  774167 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 20:44:37.227933  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.227943  774167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 20:44:37.227951  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227959  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.227974  774167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 20:44:37.227989  774167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 20:44:37.227998  774167 command_runner.go:130] >       ],
	I0729 20:44:37.228005  774167 command_runner.go:130] >       "size": "63051080",
	I0729 20:44:37.228014  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.228020  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.228027  774167 command_runner.go:130] >       },
	I0729 20:44:37.228049  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.228057  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.228067  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.228073  774167 command_runner.go:130] >     },
	I0729 20:44:37.228079  774167 command_runner.go:130] >     {
	I0729 20:44:37.228093  774167 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 20:44:37.228103  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.228113  774167 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 20:44:37.228121  774167 command_runner.go:130] >       ],
	I0729 20:44:37.228131  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.228144  774167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 20:44:37.228157  774167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 20:44:37.228167  774167 command_runner.go:130] >       ],
	I0729 20:44:37.228175  774167 command_runner.go:130] >       "size": "750414",
	I0729 20:44:37.228184  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.228192  774167 command_runner.go:130] >         "value": "65535"
	I0729 20:44:37.228198  774167 command_runner.go:130] >       },
	I0729 20:44:37.228217  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.228227  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.228236  774167 command_runner.go:130] >       "pinned": true
	I0729 20:44:37.228242  774167 command_runner.go:130] >     }
	I0729 20:44:37.228250  774167 command_runner.go:130] >   ]
	I0729 20:44:37.228254  774167 command_runner.go:130] > }
	I0729 20:44:37.228475  774167 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 20:44:37.228489  774167 crio.go:433] Images already preloaded, skipping extraction
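
The JSON above is the raw `sudo crictl images --output json` payload that crio.go uses to decide the preload is already satisfied. A short sketch of decoding that payload follows; the struct fields mirror the JSON keys in the log (note that "size" is a quoted string), but the type itself is an illustration rather than minikube's own:

// Sketch: run `crictl images --output json` and decode the payload shown above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // quoted number in the crictl output
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s (%s bytes)\n", img.RepoTags[0], img.Size)
		}
	}
}
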
	I0729 20:44:37.228548  774167 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:44:37.260712  774167 command_runner.go:130] > {
	I0729 20:44:37.260745  774167 command_runner.go:130] >   "images": [
	I0729 20:44:37.260751  774167 command_runner.go:130] >     {
	I0729 20:44:37.260764  774167 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 20:44:37.260772  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.260779  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 20:44:37.260782  774167 command_runner.go:130] >       ],
	I0729 20:44:37.260786  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.260796  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 20:44:37.260803  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 20:44:37.260810  774167 command_runner.go:130] >       ],
	I0729 20:44:37.260814  774167 command_runner.go:130] >       "size": "87165492",
	I0729 20:44:37.260818  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.260822  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.260830  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.260835  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.260857  774167 command_runner.go:130] >     },
	I0729 20:44:37.260867  774167 command_runner.go:130] >     {
	I0729 20:44:37.260876  774167 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 20:44:37.260881  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.260889  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 20:44:37.260898  774167 command_runner.go:130] >       ],
	I0729 20:44:37.260905  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.260917  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 20:44:37.260932  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 20:44:37.260940  774167 command_runner.go:130] >       ],
	I0729 20:44:37.260945  774167 command_runner.go:130] >       "size": "87174707",
	I0729 20:44:37.260951  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.260957  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.260963  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.260967  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.260973  774167 command_runner.go:130] >     },
	I0729 20:44:37.260976  774167 command_runner.go:130] >     {
	I0729 20:44:37.260984  774167 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 20:44:37.260988  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.260997  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 20:44:37.261003  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261007  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261016  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 20:44:37.261025  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 20:44:37.261030  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261035  774167 command_runner.go:130] >       "size": "1363676",
	I0729 20:44:37.261040  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.261044  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261053  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261061  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261065  774167 command_runner.go:130] >     },
	I0729 20:44:37.261068  774167 command_runner.go:130] >     {
	I0729 20:44:37.261076  774167 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 20:44:37.261081  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261086  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 20:44:37.261092  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261099  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261108  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 20:44:37.261122  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 20:44:37.261128  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261133  774167 command_runner.go:130] >       "size": "31470524",
	I0729 20:44:37.261139  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.261143  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261149  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261153  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261158  774167 command_runner.go:130] >     },
	I0729 20:44:37.261162  774167 command_runner.go:130] >     {
	I0729 20:44:37.261168  774167 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 20:44:37.261174  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261179  774167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 20:44:37.261185  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261189  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261198  774167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 20:44:37.261207  774167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 20:44:37.261216  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261222  774167 command_runner.go:130] >       "size": "61245718",
	I0729 20:44:37.261226  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.261231  774167 command_runner.go:130] >       "username": "nonroot",
	I0729 20:44:37.261235  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261241  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261245  774167 command_runner.go:130] >     },
	I0729 20:44:37.261250  774167 command_runner.go:130] >     {
	I0729 20:44:37.261255  774167 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 20:44:37.261262  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261266  774167 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 20:44:37.261271  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261275  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261284  774167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 20:44:37.261290  774167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 20:44:37.261296  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261300  774167 command_runner.go:130] >       "size": "150779692",
	I0729 20:44:37.261306  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.261311  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.261319  774167 command_runner.go:130] >       },
	I0729 20:44:37.261323  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261329  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261333  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261338  774167 command_runner.go:130] >     },
	I0729 20:44:37.261342  774167 command_runner.go:130] >     {
	I0729 20:44:37.261350  774167 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 20:44:37.261355  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261360  774167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 20:44:37.261365  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261369  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261379  774167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 20:44:37.261388  774167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 20:44:37.261393  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261397  774167 command_runner.go:130] >       "size": "117609954",
	I0729 20:44:37.261402  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.261406  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.261412  774167 command_runner.go:130] >       },
	I0729 20:44:37.261416  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261422  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261426  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261431  774167 command_runner.go:130] >     },
	I0729 20:44:37.261435  774167 command_runner.go:130] >     {
	I0729 20:44:37.261442  774167 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 20:44:37.261449  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261454  774167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 20:44:37.261460  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261463  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261483  774167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 20:44:37.261493  774167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 20:44:37.261496  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261500  774167 command_runner.go:130] >       "size": "112198984",
	I0729 20:44:37.261504  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.261510  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.261514  774167 command_runner.go:130] >       },
	I0729 20:44:37.261524  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261531  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261536  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261541  774167 command_runner.go:130] >     },
	I0729 20:44:37.261545  774167 command_runner.go:130] >     {
	I0729 20:44:37.261550  774167 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 20:44:37.261553  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261569  774167 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 20:44:37.261575  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261579  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261588  774167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 20:44:37.261599  774167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 20:44:37.261605  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261610  774167 command_runner.go:130] >       "size": "85953945",
	I0729 20:44:37.261616  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.261620  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261625  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261629  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261636  774167 command_runner.go:130] >     },
	I0729 20:44:37.261640  774167 command_runner.go:130] >     {
	I0729 20:44:37.261646  774167 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 20:44:37.261652  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261656  774167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 20:44:37.261662  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261665  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261676  774167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 20:44:37.261685  774167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 20:44:37.261691  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261695  774167 command_runner.go:130] >       "size": "63051080",
	I0729 20:44:37.261701  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.261705  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.261710  774167 command_runner.go:130] >       },
	I0729 20:44:37.261714  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261720  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261724  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261729  774167 command_runner.go:130] >     },
	I0729 20:44:37.261734  774167 command_runner.go:130] >     {
	I0729 20:44:37.261742  774167 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 20:44:37.261748  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261753  774167 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 20:44:37.261758  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261762  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261770  774167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 20:44:37.261778  774167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 20:44:37.261784  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261788  774167 command_runner.go:130] >       "size": "750414",
	I0729 20:44:37.261794  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.261798  774167 command_runner.go:130] >         "value": "65535"
	I0729 20:44:37.261803  774167 command_runner.go:130] >       },
	I0729 20:44:37.261807  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261811  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261816  774167 command_runner.go:130] >       "pinned": true
	I0729 20:44:37.261820  774167 command_runner.go:130] >     }
	I0729 20:44:37.261823  774167 command_runner.go:130] >   ]
	I0729 20:44:37.261827  774167 command_runner.go:130] > }
	I0729 20:44:37.261960  774167 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 20:44:37.261975  774167 cache_images.go:84] Images are preloaded, skipping loading
	I0729 20:44:37.261983  774167 kubeadm.go:934] updating node { 192.168.39.229 8443 v1.30.3 crio true true} ...
	I0729 20:44:37.262100  774167 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-151054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-151054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 20:44:37.262172  774167 ssh_runner.go:195] Run: crio config
	I0729 20:44:37.302846  774167 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 20:44:37.302883  774167 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 20:44:37.302894  774167 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 20:44:37.302899  774167 command_runner.go:130] > #
	I0729 20:44:37.302909  774167 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 20:44:37.302919  774167 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 20:44:37.302928  774167 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 20:44:37.302953  774167 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 20:44:37.302965  774167 command_runner.go:130] > # reload'.
	I0729 20:44:37.302974  774167 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 20:44:37.302983  774167 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 20:44:37.302994  774167 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 20:44:37.303005  774167 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 20:44:37.303014  774167 command_runner.go:130] > [crio]
	I0729 20:44:37.303023  774167 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 20:44:37.303034  774167 command_runner.go:130] > # container images, in this directory.
	I0729 20:44:37.303047  774167 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 20:44:37.303061  774167 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 20:44:37.303152  774167 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 20:44:37.303178  774167 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I0729 20:44:37.303408  774167 command_runner.go:130] > # imagestore = ""
	I0729 20:44:37.303432  774167 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 20:44:37.303441  774167 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 20:44:37.303513  774167 command_runner.go:130] > storage_driver = "overlay"
	I0729 20:44:37.303534  774167 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 20:44:37.303545  774167 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 20:44:37.303552  774167 command_runner.go:130] > storage_option = [
	I0729 20:44:37.303669  774167 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 20:44:37.303686  774167 command_runner.go:130] > ]
	I0729 20:44:37.303697  774167 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 20:44:37.303715  774167 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 20:44:37.303995  774167 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 20:44:37.304008  774167 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 20:44:37.304017  774167 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 20:44:37.304024  774167 command_runner.go:130] > # always happen on a node reboot
	I0729 20:44:37.304241  774167 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 20:44:37.304272  774167 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 20:44:37.304284  774167 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 20:44:37.304289  774167 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 20:44:37.304351  774167 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 20:44:37.304369  774167 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 20:44:37.304382  774167 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 20:44:37.304596  774167 command_runner.go:130] > # internal_wipe = true
	I0729 20:44:37.304620  774167 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 20:44:37.304631  774167 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 20:44:37.304834  774167 command_runner.go:130] > # internal_repair = false
	I0729 20:44:37.304856  774167 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 20:44:37.304865  774167 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 20:44:37.304873  774167 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 20:44:37.305067  774167 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 20:44:37.305079  774167 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 20:44:37.305085  774167 command_runner.go:130] > [crio.api]
	I0729 20:44:37.305093  774167 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 20:44:37.305310  774167 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 20:44:37.305323  774167 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 20:44:37.305527  774167 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 20:44:37.305543  774167 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 20:44:37.305556  774167 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 20:44:37.305742  774167 command_runner.go:130] > # stream_port = "0"
	I0729 20:44:37.305754  774167 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 20:44:37.306000  774167 command_runner.go:130] > # stream_enable_tls = false
	I0729 20:44:37.306014  774167 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 20:44:37.306161  774167 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 20:44:37.306173  774167 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 20:44:37.306182  774167 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 20:44:37.306189  774167 command_runner.go:130] > # minutes.
	I0729 20:44:37.306371  774167 command_runner.go:130] > # stream_tls_cert = ""
	I0729 20:44:37.306402  774167 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 20:44:37.306416  774167 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 20:44:37.306516  774167 command_runner.go:130] > # stream_tls_key = ""
	I0729 20:44:37.306529  774167 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 20:44:37.306543  774167 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 20:44:37.306570  774167 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 20:44:37.306841  774167 command_runner.go:130] > # stream_tls_ca = ""
	I0729 20:44:37.306864  774167 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 20:44:37.306872  774167 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 20:44:37.306884  774167 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 20:44:37.306896  774167 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 20:44:37.306906  774167 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 20:44:37.306917  774167 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 20:44:37.306925  774167 command_runner.go:130] > [crio.runtime]
	I0729 20:44:37.306936  774167 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 20:44:37.306947  774167 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 20:44:37.306959  774167 command_runner.go:130] > # "nofile=1024:2048"
	I0729 20:44:37.306969  774167 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 20:44:37.307027  774167 command_runner.go:130] > # default_ulimits = [
	I0729 20:44:37.307134  774167 command_runner.go:130] > # ]
	I0729 20:44:37.307151  774167 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 20:44:37.307409  774167 command_runner.go:130] > # no_pivot = false
	I0729 20:44:37.307423  774167 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 20:44:37.307432  774167 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 20:44:37.307853  774167 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 20:44:37.307869  774167 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 20:44:37.307877  774167 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 20:44:37.307891  774167 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 20:44:37.308152  774167 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 20:44:37.308164  774167 command_runner.go:130] > # Cgroup setting for conmon
	I0729 20:44:37.308175  774167 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 20:44:37.308848  774167 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 20:44:37.308866  774167 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 20:44:37.308874  774167 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 20:44:37.308884  774167 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 20:44:37.308894  774167 command_runner.go:130] > conmon_env = [
	I0729 20:44:37.308988  774167 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 20:44:37.309052  774167 command_runner.go:130] > ]
	I0729 20:44:37.309066  774167 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 20:44:37.309074  774167 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 20:44:37.309082  774167 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 20:44:37.309145  774167 command_runner.go:130] > # default_env = [
	I0729 20:44:37.309255  774167 command_runner.go:130] > # ]
	I0729 20:44:37.309268  774167 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 20:44:37.309280  774167 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0729 20:44:37.309497  774167 command_runner.go:130] > # selinux = false
	I0729 20:44:37.309511  774167 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 20:44:37.309520  774167 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 20:44:37.309529  774167 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 20:44:37.309673  774167 command_runner.go:130] > # seccomp_profile = ""
	I0729 20:44:37.309685  774167 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 20:44:37.309694  774167 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 20:44:37.309704  774167 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 20:44:37.309715  774167 command_runner.go:130] > # which might increase security.
	I0729 20:44:37.309724  774167 command_runner.go:130] > # This option is currently deprecated,
	I0729 20:44:37.309736  774167 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 20:44:37.309807  774167 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 20:44:37.309825  774167 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 20:44:37.309835  774167 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 20:44:37.309848  774167 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 20:44:37.309860  774167 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 20:44:37.309871  774167 command_runner.go:130] > # This option supports live configuration reload.
	I0729 20:44:37.310105  774167 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 20:44:37.310118  774167 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 20:44:37.310126  774167 command_runner.go:130] > # the cgroup blockio controller.
	I0729 20:44:37.310297  774167 command_runner.go:130] > # blockio_config_file = ""
	I0729 20:44:37.310311  774167 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 20:44:37.310317  774167 command_runner.go:130] > # blockio parameters.
	I0729 20:44:37.310528  774167 command_runner.go:130] > # blockio_reload = false
	I0729 20:44:37.310541  774167 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 20:44:37.310548  774167 command_runner.go:130] > # irqbalance daemon.
	I0729 20:44:37.310760  774167 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 20:44:37.310772  774167 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 20:44:37.310782  774167 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 20:44:37.310794  774167 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 20:44:37.311081  774167 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 20:44:37.311103  774167 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 20:44:37.311113  774167 command_runner.go:130] > # This option supports live configuration reload.
	I0729 20:44:37.311256  774167 command_runner.go:130] > # rdt_config_file = ""
	I0729 20:44:37.311272  774167 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 20:44:37.311356  774167 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 20:44:37.311380  774167 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 20:44:37.311514  774167 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 20:44:37.311529  774167 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 20:44:37.311542  774167 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 20:44:37.311551  774167 command_runner.go:130] > # will be added.
	I0729 20:44:37.311648  774167 command_runner.go:130] > # default_capabilities = [
	I0729 20:44:37.311801  774167 command_runner.go:130] > # 	"CHOWN",
	I0729 20:44:37.312075  774167 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 20:44:37.312088  774167 command_runner.go:130] > # 	"FSETID",
	I0729 20:44:37.312097  774167 command_runner.go:130] > # 	"FOWNER",
	I0729 20:44:37.312102  774167 command_runner.go:130] > # 	"SETGID",
	I0729 20:44:37.312109  774167 command_runner.go:130] > # 	"SETUID",
	I0729 20:44:37.312115  774167 command_runner.go:130] > # 	"SETPCAP",
	I0729 20:44:37.312125  774167 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 20:44:37.312132  774167 command_runner.go:130] > # 	"KILL",
	I0729 20:44:37.312151  774167 command_runner.go:130] > # ]
	I0729 20:44:37.312173  774167 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 20:44:37.312185  774167 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 20:44:37.312194  774167 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 20:44:37.312208  774167 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 20:44:37.312221  774167 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 20:44:37.312234  774167 command_runner.go:130] > default_sysctls = [
	I0729 20:44:37.312242  774167 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 20:44:37.312250  774167 command_runner.go:130] > ]
	I0729 20:44:37.312258  774167 command_runner.go:130] > # List of devices on the host that a
	I0729 20:44:37.312283  774167 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 20:44:37.312294  774167 command_runner.go:130] > # allowed_devices = [
	I0729 20:44:37.312306  774167 command_runner.go:130] > # 	"/dev/fuse",
	I0729 20:44:37.312317  774167 command_runner.go:130] > # ]
	I0729 20:44:37.312325  774167 command_runner.go:130] > # List of additional devices, specified as
	I0729 20:44:37.312339  774167 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 20:44:37.312351  774167 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 20:44:37.312362  774167 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 20:44:37.312372  774167 command_runner.go:130] > # additional_devices = [
	I0729 20:44:37.312384  774167 command_runner.go:130] > # ]
	I0729 20:44:37.312393  774167 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 20:44:37.312414  774167 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 20:44:37.312424  774167 command_runner.go:130] > # 	"/etc/cdi",
	I0729 20:44:37.312430  774167 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 20:44:37.312436  774167 command_runner.go:130] > # ]
	I0729 20:44:37.312447  774167 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 20:44:37.312460  774167 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 20:44:37.312470  774167 command_runner.go:130] > # Defaults to false.
	I0729 20:44:37.312478  774167 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 20:44:37.312492  774167 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 20:44:37.312505  774167 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 20:44:37.312516  774167 command_runner.go:130] > # hooks_dir = [
	I0729 20:44:37.312524  774167 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 20:44:37.312538  774167 command_runner.go:130] > # ]
	I0729 20:44:37.312551  774167 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 20:44:37.312565  774167 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 20:44:37.312574  774167 command_runner.go:130] > # its default mounts from the following two files:
	I0729 20:44:37.312583  774167 command_runner.go:130] > #
	I0729 20:44:37.312592  774167 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 20:44:37.312606  774167 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 20:44:37.312617  774167 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 20:44:37.312624  774167 command_runner.go:130] > #
	I0729 20:44:37.312633  774167 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 20:44:37.312644  774167 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 20:44:37.312654  774167 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 20:44:37.312665  774167 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 20:44:37.312673  774167 command_runner.go:130] > #
	I0729 20:44:37.312681  774167 command_runner.go:130] > # default_mounts_file = ""
	I0729 20:44:37.312697  774167 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 20:44:37.312712  774167 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 20:44:37.312721  774167 command_runner.go:130] > pids_limit = 1024
	I0729 20:44:37.312730  774167 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0729 20:44:37.312744  774167 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 20:44:37.312755  774167 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 20:44:37.312772  774167 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 20:44:37.312781  774167 command_runner.go:130] > # log_size_max = -1
	I0729 20:44:37.312793  774167 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 20:44:37.312803  774167 command_runner.go:130] > # log_to_journald = false
	I0729 20:44:37.312814  774167 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 20:44:37.312825  774167 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 20:44:37.312844  774167 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 20:44:37.312855  774167 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 20:44:37.312867  774167 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 20:44:37.312873  774167 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 20:44:37.312884  774167 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 20:44:37.312892  774167 command_runner.go:130] > # read_only = false
	I0729 20:44:37.312901  774167 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 20:44:37.312914  774167 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 20:44:37.312923  774167 command_runner.go:130] > # live configuration reload.
	I0729 20:44:37.312928  774167 command_runner.go:130] > # log_level = "info"
	I0729 20:44:37.312937  774167 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 20:44:37.312945  774167 command_runner.go:130] > # This option supports live configuration reload.
	I0729 20:44:37.312954  774167 command_runner.go:130] > # log_filter = ""
	I0729 20:44:37.312964  774167 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 20:44:37.312975  774167 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 20:44:37.312983  774167 command_runner.go:130] > # separated by comma.
	I0729 20:44:37.312994  774167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 20:44:37.313004  774167 command_runner.go:130] > # uid_mappings = ""
	I0729 20:44:37.313015  774167 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 20:44:37.313027  774167 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 20:44:37.313037  774167 command_runner.go:130] > # separated by comma.
	I0729 20:44:37.313050  774167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 20:44:37.313061  774167 command_runner.go:130] > # gid_mappings = ""
	I0729 20:44:37.313073  774167 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 20:44:37.313087  774167 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 20:44:37.313100  774167 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 20:44:37.313116  774167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 20:44:37.313125  774167 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 20:44:37.313136  774167 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 20:44:37.313149  774167 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 20:44:37.313162  774167 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 20:44:37.313177  774167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 20:44:37.313186  774167 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 20:44:37.313197  774167 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 20:44:37.313210  774167 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 20:44:37.313223  774167 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 20:44:37.313240  774167 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 20:44:37.313251  774167 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 20:44:37.313266  774167 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 20:44:37.313278  774167 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 20:44:37.313285  774167 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 20:44:37.313295  774167 command_runner.go:130] > drop_infra_ctr = false
	I0729 20:44:37.313310  774167 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 20:44:37.313321  774167 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 20:44:37.313335  774167 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 20:44:37.313345  774167 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 20:44:37.313356  774167 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 20:44:37.313369  774167 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 20:44:37.313382  774167 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 20:44:37.313392  774167 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 20:44:37.313402  774167 command_runner.go:130] > # shared_cpuset = ""
	I0729 20:44:37.313413  774167 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 20:44:37.313425  774167 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 20:44:37.313435  774167 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 20:44:37.313446  774167 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 20:44:37.313456  774167 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 20:44:37.313465  774167 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 20:44:37.313479  774167 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 20:44:37.313486  774167 command_runner.go:130] > # enable_criu_support = false
	I0729 20:44:37.313498  774167 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 20:44:37.313512  774167 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 20:44:37.313522  774167 command_runner.go:130] > # enable_pod_events = false
	I0729 20:44:37.313539  774167 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 20:44:37.313551  774167 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 20:44:37.313563  774167 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 20:44:37.313571  774167 command_runner.go:130] > # default_runtime = "runc"
	I0729 20:44:37.313581  774167 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 20:44:37.313597  774167 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of the path being created as a directory).
	I0729 20:44:37.313614  774167 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 20:44:37.313624  774167 command_runner.go:130] > # creation as a file is not desired either.
	I0729 20:44:37.313639  774167 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 20:44:37.313655  774167 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 20:44:37.313665  774167 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 20:44:37.313672  774167 command_runner.go:130] > # ]
	I0729 20:44:37.313680  774167 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 20:44:37.313692  774167 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 20:44:37.313704  774167 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 20:44:37.313714  774167 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 20:44:37.313718  774167 command_runner.go:130] > #
	I0729 20:44:37.313726  774167 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 20:44:37.313737  774167 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 20:44:37.313800  774167 command_runner.go:130] > # runtime_type = "oci"
	I0729 20:44:37.313811  774167 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 20:44:37.313823  774167 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 20:44:37.313830  774167 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 20:44:37.313841  774167 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 20:44:37.313850  774167 command_runner.go:130] > # monitor_env = []
	I0729 20:44:37.313860  774167 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 20:44:37.313869  774167 command_runner.go:130] > # allowed_annotations = []
	I0729 20:44:37.313877  774167 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 20:44:37.313885  774167 command_runner.go:130] > # Where:
	I0729 20:44:37.313893  774167 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 20:44:37.313908  774167 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 20:44:37.313923  774167 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 20:44:37.313934  774167 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 20:44:37.313943  774167 command_runner.go:130] > #   in $PATH.
	I0729 20:44:37.313962  774167 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 20:44:37.313974  774167 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 20:44:37.313986  774167 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 20:44:37.313994  774167 command_runner.go:130] > #   state.
	I0729 20:44:37.314004  774167 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 20:44:37.314016  774167 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0729 20:44:37.314026  774167 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 20:44:37.314038  774167 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 20:44:37.314051  774167 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 20:44:37.314065  774167 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 20:44:37.314077  774167 command_runner.go:130] > #   The currently recognized values are:
	I0729 20:44:37.314090  774167 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 20:44:37.314104  774167 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 20:44:37.314119  774167 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 20:44:37.314132  774167 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 20:44:37.314146  774167 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 20:44:37.314159  774167 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 20:44:37.314172  774167 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 20:44:37.314186  774167 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 20:44:37.314198  774167 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 20:44:37.314210  774167 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 20:44:37.314221  774167 command_runner.go:130] > #   deprecated option "conmon".
	I0729 20:44:37.314234  774167 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 20:44:37.314245  774167 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 20:44:37.314258  774167 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 20:44:37.314270  774167 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 20:44:37.314282  774167 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 20:44:37.314293  774167 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 20:44:37.314306  774167 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 20:44:37.314317  774167 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 20:44:37.314324  774167 command_runner.go:130] > #
	I0729 20:44:37.314333  774167 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 20:44:37.314341  774167 command_runner.go:130] > #
	I0729 20:44:37.314352  774167 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 20:44:37.314364  774167 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 20:44:37.314370  774167 command_runner.go:130] > #
	I0729 20:44:37.314386  774167 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 20:44:37.314397  774167 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 20:44:37.314404  774167 command_runner.go:130] > #
	I0729 20:44:37.314414  774167 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 20:44:37.314423  774167 command_runner.go:130] > # feature.
	I0729 20:44:37.314430  774167 command_runner.go:130] > #
	I0729 20:44:37.314439  774167 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 20:44:37.314452  774167 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 20:44:37.314466  774167 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 20:44:37.314478  774167 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 20:44:37.314490  774167 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0729 20:44:37.314498  774167 command_runner.go:130] > #
	I0729 20:44:37.314511  774167 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 20:44:37.314527  774167 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 20:44:37.314539  774167 command_runner.go:130] > #
	I0729 20:44:37.314550  774167 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0729 20:44:37.314561  774167 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 20:44:37.314568  774167 command_runner.go:130] > #
	I0729 20:44:37.314579  774167 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 20:44:37.314590  774167 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 20:44:37.314598  774167 command_runner.go:130] > # limitation.
	I0729 20:44:37.314604  774167 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 20:44:37.314612  774167 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 20:44:37.314620  774167 command_runner.go:130] > runtime_type = "oci"
	I0729 20:44:37.314629  774167 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 20:44:37.314636  774167 command_runner.go:130] > runtime_config_path = ""
	I0729 20:44:37.314646  774167 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 20:44:37.314654  774167 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 20:44:37.314662  774167 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 20:44:37.314670  774167 command_runner.go:130] > monitor_env = [
	I0729 20:44:37.314678  774167 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 20:44:37.314685  774167 command_runner.go:130] > ]
	I0729 20:44:37.314691  774167 command_runner.go:130] > privileged_without_host_devices = false
	I0729 20:44:37.314703  774167 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 20:44:37.314713  774167 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 20:44:37.314725  774167 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 20:44:37.314746  774167 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0729 20:44:37.314759  774167 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 20:44:37.314770  774167 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 20:44:37.314785  774167 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 20:44:37.314798  774167 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 20:44:37.314809  774167 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 20:44:37.314821  774167 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 20:44:37.314826  774167 command_runner.go:130] > # Example:
	I0729 20:44:37.314833  774167 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 20:44:37.314841  774167 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 20:44:37.314848  774167 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 20:44:37.314855  774167 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 20:44:37.314860  774167 command_runner.go:130] > # cpuset = 0
	I0729 20:44:37.314865  774167 command_runner.go:130] > # cpushares = "0-1"
	I0729 20:44:37.314870  774167 command_runner.go:130] > # Where:
	I0729 20:44:37.314879  774167 command_runner.go:130] > # The workload name is workload-type.
	I0729 20:44:37.314888  774167 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 20:44:37.314896  774167 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 20:44:37.314905  774167 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 20:44:37.314916  774167 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 20:44:37.314925  774167 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0729 20:44:37.314932  774167 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 20:44:37.314942  774167 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 20:44:37.314949  774167 command_runner.go:130] > # Default value is set to true
	I0729 20:44:37.314956  774167 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 20:44:37.314963  774167 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 20:44:37.314969  774167 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 20:44:37.314975  774167 command_runner.go:130] > # Default value is set to 'false'
	I0729 20:44:37.314981  774167 command_runner.go:130] > # disable_hostport_mapping = false
	I0729 20:44:37.314989  774167 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 20:44:37.314993  774167 command_runner.go:130] > #
	I0729 20:44:37.315002  774167 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 20:44:37.315011  774167 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 20:44:37.315019  774167 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 20:44:37.315028  774167 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 20:44:37.315040  774167 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 20:44:37.315061  774167 command_runner.go:130] > [crio.image]
	I0729 20:44:37.315072  774167 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 20:44:37.315080  774167 command_runner.go:130] > # default_transport = "docker://"
	I0729 20:44:37.315088  774167 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 20:44:37.315099  774167 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 20:44:37.315107  774167 command_runner.go:130] > # global_auth_file = ""
	I0729 20:44:37.315116  774167 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 20:44:37.315126  774167 command_runner.go:130] > # This option supports live configuration reload.
	I0729 20:44:37.315137  774167 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 20:44:37.315161  774167 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 20:44:37.315172  774167 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 20:44:37.315179  774167 command_runner.go:130] > # This option supports live configuration reload.
	I0729 20:44:37.315188  774167 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 20:44:37.315196  774167 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 20:44:37.315204  774167 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0729 20:44:37.315217  774167 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0729 20:44:37.315228  774167 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 20:44:37.315238  774167 command_runner.go:130] > # pause_command = "/pause"
	I0729 20:44:37.315250  774167 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 20:44:37.315260  774167 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 20:44:37.315271  774167 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 20:44:37.315281  774167 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 20:44:37.315291  774167 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 20:44:37.315302  774167 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 20:44:37.315310  774167 command_runner.go:130] > # pinned_images = [
	I0729 20:44:37.315315  774167 command_runner.go:130] > # ]
	I0729 20:44:37.315324  774167 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 20:44:37.315336  774167 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 20:44:37.315347  774167 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 20:44:37.315358  774167 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 20:44:37.315371  774167 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 20:44:37.315379  774167 command_runner.go:130] > # signature_policy = ""
	I0729 20:44:37.315389  774167 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 20:44:37.315402  774167 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 20:44:37.315412  774167 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 20:44:37.315423  774167 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0729 20:44:37.315441  774167 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 20:44:37.315451  774167 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 20:44:37.315461  774167 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 20:44:37.315473  774167 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 20:44:37.315482  774167 command_runner.go:130] > # changing them here.
	I0729 20:44:37.315488  774167 command_runner.go:130] > # insecure_registries = [
	I0729 20:44:37.315496  774167 command_runner.go:130] > # ]
	I0729 20:44:37.315507  774167 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 20:44:37.315517  774167 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 20:44:37.315526  774167 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 20:44:37.315539  774167 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 20:44:37.315549  774167 command_runner.go:130] > # big_files_temporary_dir = ""
	I0729 20:44:37.315559  774167 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 20:44:37.315567  774167 command_runner.go:130] > # CNI plugins.
	I0729 20:44:37.315573  774167 command_runner.go:130] > [crio.network]
	I0729 20:44:37.315585  774167 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 20:44:37.315601  774167 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0729 20:44:37.315610  774167 command_runner.go:130] > # cni_default_network = ""
	I0729 20:44:37.315621  774167 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 20:44:37.315633  774167 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 20:44:37.315643  774167 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 20:44:37.315652  774167 command_runner.go:130] > # plugin_dirs = [
	I0729 20:44:37.315658  774167 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 20:44:37.315666  774167 command_runner.go:130] > # ]
	I0729 20:44:37.315674  774167 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0729 20:44:37.315682  774167 command_runner.go:130] > [crio.metrics]
	I0729 20:44:37.315690  774167 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 20:44:37.315699  774167 command_runner.go:130] > enable_metrics = true
	I0729 20:44:37.315706  774167 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 20:44:37.315719  774167 command_runner.go:130] > # Per default all metrics are enabled.
	I0729 20:44:37.315732  774167 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0729 20:44:37.315745  774167 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 20:44:37.315756  774167 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 20:44:37.315765  774167 command_runner.go:130] > # metrics_collectors = [
	I0729 20:44:37.315770  774167 command_runner.go:130] > # 	"operations",
	I0729 20:44:37.315777  774167 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 20:44:37.315782  774167 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 20:44:37.315788  774167 command_runner.go:130] > # 	"operations_errors",
	I0729 20:44:37.315792  774167 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 20:44:37.315796  774167 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 20:44:37.315801  774167 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 20:44:37.315807  774167 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 20:44:37.315811  774167 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 20:44:37.315818  774167 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 20:44:37.315822  774167 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 20:44:37.315828  774167 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 20:44:37.315832  774167 command_runner.go:130] > # 	"containers_oom_total",
	I0729 20:44:37.315838  774167 command_runner.go:130] > # 	"containers_oom",
	I0729 20:44:37.315843  774167 command_runner.go:130] > # 	"processes_defunct",
	I0729 20:44:37.315848  774167 command_runner.go:130] > # 	"operations_total",
	I0729 20:44:37.315852  774167 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 20:44:37.315864  774167 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 20:44:37.315873  774167 command_runner.go:130] > # 	"operations_errors_total",
	I0729 20:44:37.315880  774167 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 20:44:37.315890  774167 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 20:44:37.315896  774167 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 20:44:37.315905  774167 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 20:44:37.315912  774167 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 20:44:37.315922  774167 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 20:44:37.315932  774167 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 20:44:37.315943  774167 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 20:44:37.315954  774167 command_runner.go:130] > # ]
	I0729 20:44:37.315965  774167 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 20:44:37.315974  774167 command_runner.go:130] > # metrics_port = 9090
	I0729 20:44:37.315982  774167 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 20:44:37.315990  774167 command_runner.go:130] > # metrics_socket = ""
	I0729 20:44:37.315995  774167 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 20:44:37.316001  774167 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 20:44:37.316009  774167 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 20:44:37.316014  774167 command_runner.go:130] > # certificate on any modification event.
	I0729 20:44:37.316020  774167 command_runner.go:130] > # metrics_cert = ""
	I0729 20:44:37.316025  774167 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 20:44:37.316049  774167 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 20:44:37.316059  774167 command_runner.go:130] > # metrics_key = ""
	I0729 20:44:37.316068  774167 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 20:44:37.316076  774167 command_runner.go:130] > [crio.tracing]
	I0729 20:44:37.316086  774167 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 20:44:37.316095  774167 command_runner.go:130] > # enable_tracing = false
	I0729 20:44:37.316103  774167 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0729 20:44:37.316111  774167 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 20:44:37.316117  774167 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 20:44:37.316124  774167 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0729 20:44:37.316128  774167 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 20:44:37.316132  774167 command_runner.go:130] > [crio.nri]
	I0729 20:44:37.316135  774167 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 20:44:37.316139  774167 command_runner.go:130] > # enable_nri = false
	I0729 20:44:37.316144  774167 command_runner.go:130] > # NRI socket to listen on.
	I0729 20:44:37.316148  774167 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 20:44:37.316152  774167 command_runner.go:130] > # NRI plugin directory to use.
	I0729 20:44:37.316156  774167 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 20:44:37.316161  774167 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 20:44:37.316168  774167 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 20:44:37.316174  774167 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 20:44:37.316180  774167 command_runner.go:130] > # nri_disable_connections = false
	I0729 20:44:37.316186  774167 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 20:44:37.316193  774167 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 20:44:37.316198  774167 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 20:44:37.316204  774167 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0729 20:44:37.316210  774167 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 20:44:37.316214  774167 command_runner.go:130] > [crio.stats]
	I0729 20:44:37.316220  774167 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 20:44:37.316227  774167 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 20:44:37.316231  774167 command_runner.go:130] > # stats_collection_period = 0
	I0729 20:44:37.316256  774167 command_runner.go:130] ! time="2024-07-29 20:44:37.267158551Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 20:44:37.316271  774167 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
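	The block above is CRI-O 1.29.1's effective configuration echoed back with most keys commented out at their built-in defaults. For reference only, a minimal sketch of how a few of those keys could be overridden on such a node through CRI-O's drop-in directory /etc/crio/crio.conf.d/ follows; the file name 99-overrides.conf and the chosen values are assumptions for illustration, and no such override was applied in this run:
	
	# Illustrative only: a drop-in overriding keys that appear in the dump above.
	# File name and values are assumptions, not part of this test run.
	sudo tee /etc/crio/crio.conf.d/99-overrides.conf >/dev/null <<-'EOF'
	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	EOF
	
	# Restart CRI-O so the drop-in takes effect.
	sudo systemctl restart crio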
	I0729 20:44:37.316389  774167 cni.go:84] Creating CNI manager for ""
	I0729 20:44:37.316399  774167 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 20:44:37.316409  774167 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 20:44:37.316431  774167 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.229 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-151054 NodeName:multinode-151054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 20:44:37.316576  774167 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-151054"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 20:44:37.316640  774167 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 20:44:37.326094  774167 command_runner.go:130] > kubeadm
	I0729 20:44:37.326115  774167 command_runner.go:130] > kubectl
	I0729 20:44:37.326120  774167 command_runner.go:130] > kubelet
	I0729 20:44:37.326140  774167 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 20:44:37.326201  774167 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 20:44:37.334826  774167 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0729 20:44:37.350903  774167 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 20:44:37.366907  774167 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
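	The kubeadm configuration rendered above is what the scp step just copied to /var/tmp/minikube/kubeadm.yaml.new on the node. If one wanted to read back the file that actually landed there while reproducing this run, something like the following would do it; the command itself was not part of the captured run, only the profile name and path are taken from the log:
	
	# Illustrative: read back the generated kubeadm config from the multinode-151054 node.
	minikube ssh -p multinode-151054 "sudo cat /var/tmp/minikube/kubeadm.yaml.new"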
	I0729 20:44:37.381984  774167 ssh_runner.go:195] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0729 20:44:37.385561  774167 command_runner.go:130] > 192.168.39.229	control-plane.minikube.internal
	I0729 20:44:37.385643  774167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:44:37.523722  774167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:44:37.538224  774167 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054 for IP: 192.168.39.229
	I0729 20:44:37.538247  774167 certs.go:194] generating shared ca certs ...
	I0729 20:44:37.538270  774167 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:44:37.538466  774167 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 20:44:37.538506  774167 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 20:44:37.538515  774167 certs.go:256] generating profile certs ...
	I0729 20:44:37.538601  774167 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/client.key
	I0729 20:44:37.538657  774167 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/apiserver.key.d3ff0f9a
	I0729 20:44:37.538694  774167 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/proxy-client.key
	I0729 20:44:37.538705  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 20:44:37.538717  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 20:44:37.538727  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 20:44:37.538737  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 20:44:37.538746  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 20:44:37.538781  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 20:44:37.538795  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 20:44:37.538804  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 20:44:37.538863  774167 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem (1338 bytes)
	W0729 20:44:37.538892  774167 certs.go:480] ignoring /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962_empty.pem, impossibly tiny 0 bytes
	I0729 20:44:37.538902  774167 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 20:44:37.538924  774167 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 20:44:37.538951  774167 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 20:44:37.538972  774167 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 20:44:37.539008  774167 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:44:37.539034  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem -> /usr/share/ca-certificates/740962.pem
	I0729 20:44:37.539048  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /usr/share/ca-certificates/7409622.pem
	I0729 20:44:37.539064  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:44:37.539687  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 20:44:37.563311  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 20:44:37.586579  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 20:44:37.608648  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 20:44:37.641739  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 20:44:37.692533  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 20:44:37.723150  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 20:44:37.753983  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 20:44:37.776664  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem --> /usr/share/ca-certificates/740962.pem (1338 bytes)
	I0729 20:44:37.797720  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /usr/share/ca-certificates/7409622.pem (1708 bytes)
	I0729 20:44:37.826560  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 20:44:37.852340  774167 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 20:44:37.872885  774167 ssh_runner.go:195] Run: openssl version
	I0729 20:44:37.879821  774167 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 20:44:37.880177  774167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/740962.pem && ln -fs /usr/share/ca-certificates/740962.pem /etc/ssl/certs/740962.pem"
	I0729 20:44:37.899542  774167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/740962.pem
	I0729 20:44:37.904816  774167 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 20:44:37.906162  774167 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 20:44:37.906228  774167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/740962.pem
	I0729 20:44:37.917373  774167 command_runner.go:130] > 51391683
	I0729 20:44:37.917630  774167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/740962.pem /etc/ssl/certs/51391683.0"
	I0729 20:44:37.932755  774167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7409622.pem && ln -fs /usr/share/ca-certificates/7409622.pem /etc/ssl/certs/7409622.pem"
	I0729 20:44:37.944419  774167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7409622.pem
	I0729 20:44:37.948514  774167 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 20:44:37.948646  774167 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 20:44:37.948694  774167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7409622.pem
	I0729 20:44:37.953766  774167 command_runner.go:130] > 3ec20f2e
	I0729 20:44:37.953980  774167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7409622.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 20:44:37.964434  774167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 20:44:37.978359  774167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:44:37.982532  774167 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:44:37.982717  774167 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:44:37.982774  774167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:44:37.987801  774167 command_runner.go:130] > b5213941
	I0729 20:44:37.987969  774167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
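	The three ls / openssl / ln sequences above implement OpenSSL's subject-hash lookup scheme: each CA certificate is hashed with openssl x509 -hash -noout and symlinked as <hash>.0 under /etc/ssl/certs so TLS clients on the node can find it (hashes 51391683, 3ec20f2e and b5213941 in the output above). A condensed sketch of that same pattern, with the paths taken from the log (not itself executed as part of the run):
	
	# Illustrative sketch of the hash-and-link step performed above for each CA certificate.
	for cert in /usr/share/ca-certificates/740962.pem \
	            /usr/share/ca-certificates/7409622.pem \
	            /usr/share/ca-certificates/minikubeCA.pem; do
	  hash=$(openssl x509 -hash -noout -in "$cert")    # e.g. 51391683 for 740962.pem
	  sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"   # OpenSSL subject-hash lookup name
	done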
	I0729 20:44:37.997078  774167 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:44:38.001192  774167 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:44:38.001219  774167 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 20:44:38.001229  774167 command_runner.go:130] > Device: 253,1	Inode: 4197931     Links: 1
	I0729 20:44:38.001239  774167 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 20:44:38.001250  774167 command_runner.go:130] > Access: 2024-07-29 20:37:48.112504643 +0000
	I0729 20:44:38.001262  774167 command_runner.go:130] > Modify: 2024-07-29 20:37:48.112504643 +0000
	I0729 20:44:38.001270  774167 command_runner.go:130] > Change: 2024-07-29 20:37:48.112504643 +0000
	I0729 20:44:38.001278  774167 command_runner.go:130] >  Birth: 2024-07-29 20:37:48.112504643 +0000
	I0729 20:44:38.001330  774167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 20:44:38.006805  774167 command_runner.go:130] > Certificate will not expire
	I0729 20:44:38.006883  774167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 20:44:38.012200  774167 command_runner.go:130] > Certificate will not expire
	I0729 20:44:38.012284  774167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 20:44:38.017465  774167 command_runner.go:130] > Certificate will not expire
	I0729 20:44:38.017631  774167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 20:44:38.022748  774167 command_runner.go:130] > Certificate will not expire
	I0729 20:44:38.022808  774167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 20:44:38.027750  774167 command_runner.go:130] > Certificate will not expire
	I0729 20:44:38.027939  774167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 20:44:38.032992  774167 command_runner.go:130] > Certificate will not expire
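	Each "Certificate will not expire" line above comes from openssl x509 -checkend 86400, which exits non-zero when the certificate expires within the next 86400 seconds (24 hours). A minimal standalone check of the same kind, covering certificates named in the log (illustrative only, not part of the run):
	
	# Illustrative: flag any control-plane certificate that would expire within 24 hours.
	for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	           /var/lib/minikube/certs/etcd/server.crt \
	           /var/lib/minikube/certs/front-proxy-client.crt; do
	  if sudo openssl x509 -noout -in "$crt" -checkend 86400 >/dev/null; then
	    echo "$crt: will not expire within 24 hours"
	  else
	    echo "$crt: expires within 24 hours" >&2
	  fi
	done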
	I0729 20:44:38.033065  774167 kubeadm.go:392] StartCluster: {Name:multinode-151054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-151054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:44:38.033229  774167 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 20:44:38.033295  774167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 20:44:38.066384  774167 command_runner.go:130] > c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a
	I0729 20:44:38.066412  774167 command_runner.go:130] > b2898ece6d62716cb34a0d2298ea9287f4e8128003a938b04d05749163588a62
	I0729 20:44:38.066421  774167 command_runner.go:130] > 13a24620fc650642e35895ef8075b03ee6f69e5936d47695a76046bb755765ca
	I0729 20:44:38.066432  774167 command_runner.go:130] > ff4b9a92f1149a48f7b46e24b20bbcf29fa26de244fb0e40227cb81df381afb0
	I0729 20:44:38.066442  774167 command_runner.go:130] > 8cc1098813fc68af4447889f8e1a0ab2502ac50e35de420a3715744f54a9a2d0
	I0729 20:44:38.066449  774167 command_runner.go:130] > bb8e0a4b6f646dee0d17ae90ebaae5980a9278c344a3cbf71ef029fbab9a09e8
	I0729 20:44:38.066455  774167 command_runner.go:130] > 888230c2bc7db0c13a92b39530084837b21ca72108fe4af8328b412e66c2b104
	I0729 20:44:38.066462  774167 command_runner.go:130] > 4f6aa9c58ffc6a2c3ab0e140d828a572520317bde487b2bd3786507089e9c7a1
	I0729 20:44:38.066467  774167 command_runner.go:130] > 1e7183d60699a2647987166cb5cd762b512c2aa6a62ef32a3fb000f0df9b9a77
	I0729 20:44:38.067875  774167 cri.go:89] found id: "c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a"
	I0729 20:44:38.067900  774167 cri.go:89] found id: "b2898ece6d62716cb34a0d2298ea9287f4e8128003a938b04d05749163588a62"
	I0729 20:44:38.067910  774167 cri.go:89] found id: "13a24620fc650642e35895ef8075b03ee6f69e5936d47695a76046bb755765ca"
	I0729 20:44:38.067915  774167 cri.go:89] found id: "ff4b9a92f1149a48f7b46e24b20bbcf29fa26de244fb0e40227cb81df381afb0"
	I0729 20:44:38.067920  774167 cri.go:89] found id: "8cc1098813fc68af4447889f8e1a0ab2502ac50e35de420a3715744f54a9a2d0"
	I0729 20:44:38.067926  774167 cri.go:89] found id: "bb8e0a4b6f646dee0d17ae90ebaae5980a9278c344a3cbf71ef029fbab9a09e8"
	I0729 20:44:38.067931  774167 cri.go:89] found id: "888230c2bc7db0c13a92b39530084837b21ca72108fe4af8328b412e66c2b104"
	I0729 20:44:38.067936  774167 cri.go:89] found id: "4f6aa9c58ffc6a2c3ab0e140d828a572520317bde487b2bd3786507089e9c7a1"
	I0729 20:44:38.067941  774167 cri.go:89] found id: "1e7183d60699a2647987166cb5cd762b512c2aa6a62ef32a3fb000f0df9b9a77"
	I0729 20:44:38.067950  774167 cri.go:89] found id: ""
	I0729 20:44:38.068012  774167 ssh_runner.go:195] Run: sudo runc list -f json
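	The ID list above is the output of the crictl invocation at 20:44:38.033295, which asks CRI-O for every container, running or exited, carrying the kube-system namespace label. The same query can be repeated directly on the node (illustrative; it simply mirrors the command shown in the log):
	
	# List all kube-system container IDs known to CRI-O, including exited ones.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system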
	
	
	==> CRI-O <==
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.547103427Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722285986547081597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc31fb8d-cc15-422e-b4dc-49fab3f944c5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.547577521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae803170-eee6-453f-8494-083c7e7b9f02 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.547640813Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae803170-eee6-453f-8494-083c7e7b9f02 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.547971887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51a8061550f8bccf115e8b220ba0e7236932887392456a5639c2547979a336b4,PodSandboxId:7416f5fa879889c86cfe91ccd00f0e3b341d8571aa2169f2fcb0c31ce778e64b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722285917710455181,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8c5533285ef716732054213308e08584216b6bae4256a2378f6be6f8d9f087,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722285890617429299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb574400709c948e74f17adcb1fb26ad6eaadcf146cdcd77d923bb6222369b9,PodSandboxId:ed02bcecb90b1d19fbbe78a1c1861b5c8c41fb5cfe2709b7a242cfb85a3c2397,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722285884345341404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b52b4d
d-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54494c8905f16425e071d55bd738b79d3173f93e384b59b7e84fede2096c255,PodSandboxId:5dfb8e27131a328d456f9bc91b0cac9d98da6c9bc985387e59ef31e889cf4477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722285884272984672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},A
nnotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e87aba726b426aa6ce249d71d77623d5a91dd30ca5292310cb0e5220f80c5b,PodSandboxId:f2bd75299a3f6d81050d7afdb9924f01d1834a2521607173f8f9bb0eef272cab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722285884205114838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856ef3eb93f24fd07576ab7206d2085805662bcebb64556667b2b01e500ddb72,PodSandboxId:26c5ca4c1bc2980973ab62350998e38dbc4a9be2d341ea8373bedf42a5e1ac84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722285884174240259,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 71466a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff53b546c4c3702886138d0b4711c165dbba5bdd1118c3abb7e2603b8ddac15,PodSandboxId:e2b29044c7d930c4c36484999e91e7aca4a656ab43aa6be775e6986814b762d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722285884108160102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be41b62accf2841e0bf2a352b8c68f862e471bcd07645f4572d107d85ea1b983,PodSandboxId:154dc60e024b5855877d0390afbbf16840e353b9335c918d4c3d7cc68bc96298,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722285884088632816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c4392de50bc07017a521e440ba7d279a532b6fd1f4cc13180077b462921dff,PodSandboxId:20b521e6cfba55320d2b030d0264c2ba62bc617f3ccc94bb0e57ceb532fd2b03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722285884007005249,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722285877820254996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a276adeed80c028bb35eb09b2cb443209b068a299ac5694c5d2167332c145bb,PodSandboxId:fc8b196b6ecd4e0283ae3ae01ce19e90baf801b1e8034dc914a6cc1dc4984ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722285561911648549,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a24620fc650642e35895ef8075b03ee6f69e5936d47695a76046bb755765ca,PodSandboxId:ba883697d286373559b0b0bf93d6c059c27ef3586757046381f298aa3a05fe77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722285506317278666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0b52b4dd-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4b9a92f1149a48f7b46e24b20bbcf29fa26de244fb0e40227cb81df381afb0,PodSandboxId:05b772f39774e53f2d3ffded31ab8bf030242810585805892b5f95248b889ccb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722285494358871452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.container.hash: 71466a39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc1098813fc68af4447889f8e1a0ab2502ac50e35de420a3715744f54a9a2d0,PodSandboxId:27e1b6698aa587fb3a445623a23f43432086c464d51f9909caeb400338b21951,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722285492070788837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8e0a4b6f646dee0d17ae90ebaae5980a9278c344a3cbf71ef029fbab9a09e8,PodSandboxId:27bd748552179126164ade67e5386543dcdc732b1b3ea11cfe1f7e5544345696,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722285471191217385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:888230c2bc7db0c13a92b39530084837b21ca72108fe4af8328b412e66c2b104,PodSandboxId:fa2f571bbc1b9ead40892d5e97bdd9171d30b62484371ddd16bbd53fe198f5ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722285471184707486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6aa9c58ffc6a2c3ab0e140d828a572520317bde487b2bd3786507089e9c7a1,PodSandboxId:a7afdd5c40aa8979f340c2615c1d68b292205d3e8db3e4088b2fe903d43194c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722285471129572125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c
44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7183d60699a2647987166cb5cd762b512c2aa6a62ef32a3fb000f0df9b9a77,PodSandboxId:c4121b9f76afd9d331a8a948540b49b135bbaf0b3b17f542a072778fc54257ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722285471099350018,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae803170-eee6-453f-8494-083c7e7b9f02 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.587584752Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=25dc3c2e-3731-47ed-b611-fdac8a6e80e2 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.587665901Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=25dc3c2e-3731-47ed-b611-fdac8a6e80e2 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.590612751Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=68ed2e02-6432-451e-b619-b8afd80617dc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.591101298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722285986591075096,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=68ed2e02-6432-451e-b619-b8afd80617dc name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.591683856Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a80d44f7-4128-48bc-9fbc-c48cb643e1e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.591747677Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a80d44f7-4128-48bc-9fbc-c48cb643e1e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.592118540Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51a8061550f8bccf115e8b220ba0e7236932887392456a5639c2547979a336b4,PodSandboxId:7416f5fa879889c86cfe91ccd00f0e3b341d8571aa2169f2fcb0c31ce778e64b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722285917710455181,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8c5533285ef716732054213308e08584216b6bae4256a2378f6be6f8d9f087,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722285890617429299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb574400709c948e74f17adcb1fb26ad6eaadcf146cdcd77d923bb6222369b9,PodSandboxId:ed02bcecb90b1d19fbbe78a1c1861b5c8c41fb5cfe2709b7a242cfb85a3c2397,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722285884345341404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b52b4d
d-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54494c8905f16425e071d55bd738b79d3173f93e384b59b7e84fede2096c255,PodSandboxId:5dfb8e27131a328d456f9bc91b0cac9d98da6c9bc985387e59ef31e889cf4477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722285884272984672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},A
nnotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e87aba726b426aa6ce249d71d77623d5a91dd30ca5292310cb0e5220f80c5b,PodSandboxId:f2bd75299a3f6d81050d7afdb9924f01d1834a2521607173f8f9bb0eef272cab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722285884205114838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856ef3eb93f24fd07576ab7206d2085805662bcebb64556667b2b01e500ddb72,PodSandboxId:26c5ca4c1bc2980973ab62350998e38dbc4a9be2d341ea8373bedf42a5e1ac84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722285884174240259,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 71466a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff53b546c4c3702886138d0b4711c165dbba5bdd1118c3abb7e2603b8ddac15,PodSandboxId:e2b29044c7d930c4c36484999e91e7aca4a656ab43aa6be775e6986814b762d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722285884108160102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be41b62accf2841e0bf2a352b8c68f862e471bcd07645f4572d107d85ea1b983,PodSandboxId:154dc60e024b5855877d0390afbbf16840e353b9335c918d4c3d7cc68bc96298,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722285884088632816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c4392de50bc07017a521e440ba7d279a532b6fd1f4cc13180077b462921dff,PodSandboxId:20b521e6cfba55320d2b030d0264c2ba62bc617f3ccc94bb0e57ceb532fd2b03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722285884007005249,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722285877820254996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a276adeed80c028bb35eb09b2cb443209b068a299ac5694c5d2167332c145bb,PodSandboxId:fc8b196b6ecd4e0283ae3ae01ce19e90baf801b1e8034dc914a6cc1dc4984ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722285561911648549,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a24620fc650642e35895ef8075b03ee6f69e5936d47695a76046bb755765ca,PodSandboxId:ba883697d286373559b0b0bf93d6c059c27ef3586757046381f298aa3a05fe77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722285506317278666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0b52b4dd-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4b9a92f1149a48f7b46e24b20bbcf29fa26de244fb0e40227cb81df381afb0,PodSandboxId:05b772f39774e53f2d3ffded31ab8bf030242810585805892b5f95248b889ccb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722285494358871452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.container.hash: 71466a39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc1098813fc68af4447889f8e1a0ab2502ac50e35de420a3715744f54a9a2d0,PodSandboxId:27e1b6698aa587fb3a445623a23f43432086c464d51f9909caeb400338b21951,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722285492070788837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8e0a4b6f646dee0d17ae90ebaae5980a9278c344a3cbf71ef029fbab9a09e8,PodSandboxId:27bd748552179126164ade67e5386543dcdc732b1b3ea11cfe1f7e5544345696,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722285471191217385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:888230c2bc7db0c13a92b39530084837b21ca72108fe4af8328b412e66c2b104,PodSandboxId:fa2f571bbc1b9ead40892d5e97bdd9171d30b62484371ddd16bbd53fe198f5ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722285471184707486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6aa9c58ffc6a2c3ab0e140d828a572520317bde487b2bd3786507089e9c7a1,PodSandboxId:a7afdd5c40aa8979f340c2615c1d68b292205d3e8db3e4088b2fe903d43194c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722285471129572125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c
44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7183d60699a2647987166cb5cd762b512c2aa6a62ef32a3fb000f0df9b9a77,PodSandboxId:c4121b9f76afd9d331a8a948540b49b135bbaf0b3b17f542a072778fc54257ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722285471099350018,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a80d44f7-4128-48bc-9fbc-c48cb643e1e9 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.635212210Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=66da4c06-825b-4d93-8731-c9cf8f9841dc name=/runtime.v1.RuntimeService/Version
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.635301626Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=66da4c06-825b-4d93-8731-c9cf8f9841dc name=/runtime.v1.RuntimeService/Version
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.637195122Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=13819cfd-5606-4e54-871e-9c6795a91311 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.637629043Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722285986637608181,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=13819cfd-5606-4e54-871e-9c6795a91311 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.638144229Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2629ea2b-4043-4f66-9722-07cbbeebad2a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.638306450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2629ea2b-4043-4f66-9722-07cbbeebad2a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.638731289Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51a8061550f8bccf115e8b220ba0e7236932887392456a5639c2547979a336b4,PodSandboxId:7416f5fa879889c86cfe91ccd00f0e3b341d8571aa2169f2fcb0c31ce778e64b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722285917710455181,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8c5533285ef716732054213308e08584216b6bae4256a2378f6be6f8d9f087,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722285890617429299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb574400709c948e74f17adcb1fb26ad6eaadcf146cdcd77d923bb6222369b9,PodSandboxId:ed02bcecb90b1d19fbbe78a1c1861b5c8c41fb5cfe2709b7a242cfb85a3c2397,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722285884345341404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b52b4d
d-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54494c8905f16425e071d55bd738b79d3173f93e384b59b7e84fede2096c255,PodSandboxId:5dfb8e27131a328d456f9bc91b0cac9d98da6c9bc985387e59ef31e889cf4477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722285884272984672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},A
nnotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e87aba726b426aa6ce249d71d77623d5a91dd30ca5292310cb0e5220f80c5b,PodSandboxId:f2bd75299a3f6d81050d7afdb9924f01d1834a2521607173f8f9bb0eef272cab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722285884205114838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856ef3eb93f24fd07576ab7206d2085805662bcebb64556667b2b01e500ddb72,PodSandboxId:26c5ca4c1bc2980973ab62350998e38dbc4a9be2d341ea8373bedf42a5e1ac84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722285884174240259,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 71466a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff53b546c4c3702886138d0b4711c165dbba5bdd1118c3abb7e2603b8ddac15,PodSandboxId:e2b29044c7d930c4c36484999e91e7aca4a656ab43aa6be775e6986814b762d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722285884108160102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be41b62accf2841e0bf2a352b8c68f862e471bcd07645f4572d107d85ea1b983,PodSandboxId:154dc60e024b5855877d0390afbbf16840e353b9335c918d4c3d7cc68bc96298,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722285884088632816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c4392de50bc07017a521e440ba7d279a532b6fd1f4cc13180077b462921dff,PodSandboxId:20b521e6cfba55320d2b030d0264c2ba62bc617f3ccc94bb0e57ceb532fd2b03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722285884007005249,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722285877820254996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a276adeed80c028bb35eb09b2cb443209b068a299ac5694c5d2167332c145bb,PodSandboxId:fc8b196b6ecd4e0283ae3ae01ce19e90baf801b1e8034dc914a6cc1dc4984ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722285561911648549,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a24620fc650642e35895ef8075b03ee6f69e5936d47695a76046bb755765ca,PodSandboxId:ba883697d286373559b0b0bf93d6c059c27ef3586757046381f298aa3a05fe77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722285506317278666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0b52b4dd-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4b9a92f1149a48f7b46e24b20bbcf29fa26de244fb0e40227cb81df381afb0,PodSandboxId:05b772f39774e53f2d3ffded31ab8bf030242810585805892b5f95248b889ccb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722285494358871452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.container.hash: 71466a39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc1098813fc68af4447889f8e1a0ab2502ac50e35de420a3715744f54a9a2d0,PodSandboxId:27e1b6698aa587fb3a445623a23f43432086c464d51f9909caeb400338b21951,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722285492070788837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8e0a4b6f646dee0d17ae90ebaae5980a9278c344a3cbf71ef029fbab9a09e8,PodSandboxId:27bd748552179126164ade67e5386543dcdc732b1b3ea11cfe1f7e5544345696,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722285471191217385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:888230c2bc7db0c13a92b39530084837b21ca72108fe4af8328b412e66c2b104,PodSandboxId:fa2f571bbc1b9ead40892d5e97bdd9171d30b62484371ddd16bbd53fe198f5ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722285471184707486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6aa9c58ffc6a2c3ab0e140d828a572520317bde487b2bd3786507089e9c7a1,PodSandboxId:a7afdd5c40aa8979f340c2615c1d68b292205d3e8db3e4088b2fe903d43194c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722285471129572125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c
44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7183d60699a2647987166cb5cd762b512c2aa6a62ef32a3fb000f0df9b9a77,PodSandboxId:c4121b9f76afd9d331a8a948540b49b135bbaf0b3b17f542a072778fc54257ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722285471099350018,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2629ea2b-4043-4f66-9722-07cbbeebad2a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.677924320Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5e6a3413-e8d6-42e3-a50c-154e5b989bdc name=/runtime.v1.RuntimeService/Version
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.678002055Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5e6a3413-e8d6-42e3-a50c-154e5b989bdc name=/runtime.v1.RuntimeService/Version
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.678874170Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79695b9a-c28c-45d7-bc06-8bcea4d1e852 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.679482501Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722285986679456490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79695b9a-c28c-45d7-bc06-8bcea4d1e852 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.680003850Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14ecb8a6-d8d9-4c00-8d44-65ee24dfac9d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.680057034Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14ecb8a6-d8d9-4c00-8d44-65ee24dfac9d name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:46:26 multinode-151054 crio[2857]: time="2024-07-29 20:46:26.680538662Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51a8061550f8bccf115e8b220ba0e7236932887392456a5639c2547979a336b4,PodSandboxId:7416f5fa879889c86cfe91ccd00f0e3b341d8571aa2169f2fcb0c31ce778e64b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722285917710455181,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8c5533285ef716732054213308e08584216b6bae4256a2378f6be6f8d9f087,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722285890617429299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb574400709c948e74f17adcb1fb26ad6eaadcf146cdcd77d923bb6222369b9,PodSandboxId:ed02bcecb90b1d19fbbe78a1c1861b5c8c41fb5cfe2709b7a242cfb85a3c2397,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722285884345341404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b52b4d
d-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54494c8905f16425e071d55bd738b79d3173f93e384b59b7e84fede2096c255,PodSandboxId:5dfb8e27131a328d456f9bc91b0cac9d98da6c9bc985387e59ef31e889cf4477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722285884272984672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},A
nnotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e87aba726b426aa6ce249d71d77623d5a91dd30ca5292310cb0e5220f80c5b,PodSandboxId:f2bd75299a3f6d81050d7afdb9924f01d1834a2521607173f8f9bb0eef272cab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722285884205114838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856ef3eb93f24fd07576ab7206d2085805662bcebb64556667b2b01e500ddb72,PodSandboxId:26c5ca4c1bc2980973ab62350998e38dbc4a9be2d341ea8373bedf42a5e1ac84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722285884174240259,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 71466a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff53b546c4c3702886138d0b4711c165dbba5bdd1118c3abb7e2603b8ddac15,PodSandboxId:e2b29044c7d930c4c36484999e91e7aca4a656ab43aa6be775e6986814b762d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722285884108160102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be41b62accf2841e0bf2a352b8c68f862e471bcd07645f4572d107d85ea1b983,PodSandboxId:154dc60e024b5855877d0390afbbf16840e353b9335c918d4c3d7cc68bc96298,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722285884088632816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c4392de50bc07017a521e440ba7d279a532b6fd1f4cc13180077b462921dff,PodSandboxId:20b521e6cfba55320d2b030d0264c2ba62bc617f3ccc94bb0e57ceb532fd2b03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722285884007005249,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722285877820254996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a276adeed80c028bb35eb09b2cb443209b068a299ac5694c5d2167332c145bb,PodSandboxId:fc8b196b6ecd4e0283ae3ae01ce19e90baf801b1e8034dc914a6cc1dc4984ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722285561911648549,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a24620fc650642e35895ef8075b03ee6f69e5936d47695a76046bb755765ca,PodSandboxId:ba883697d286373559b0b0bf93d6c059c27ef3586757046381f298aa3a05fe77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722285506317278666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0b52b4dd-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4b9a92f1149a48f7b46e24b20bbcf29fa26de244fb0e40227cb81df381afb0,PodSandboxId:05b772f39774e53f2d3ffded31ab8bf030242810585805892b5f95248b889ccb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722285494358871452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.container.hash: 71466a39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc1098813fc68af4447889f8e1a0ab2502ac50e35de420a3715744f54a9a2d0,PodSandboxId:27e1b6698aa587fb3a445623a23f43432086c464d51f9909caeb400338b21951,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722285492070788837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8e0a4b6f646dee0d17ae90ebaae5980a9278c344a3cbf71ef029fbab9a09e8,PodSandboxId:27bd748552179126164ade67e5386543dcdc732b1b3ea11cfe1f7e5544345696,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722285471191217385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:888230c2bc7db0c13a92b39530084837b21ca72108fe4af8328b412e66c2b104,PodSandboxId:fa2f571bbc1b9ead40892d5e97bdd9171d30b62484371ddd16bbd53fe198f5ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722285471184707486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6aa9c58ffc6a2c3ab0e140d828a572520317bde487b2bd3786507089e9c7a1,PodSandboxId:a7afdd5c40aa8979f340c2615c1d68b292205d3e8db3e4088b2fe903d43194c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722285471129572125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c
44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7183d60699a2647987166cb5cd762b512c2aa6a62ef32a3fb000f0df9b9a77,PodSandboxId:c4121b9f76afd9d331a8a948540b49b135bbaf0b3b17f542a072778fc54257ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722285471099350018,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14ecb8a6-d8d9-4c00-8d44-65ee24dfac9d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	51a8061550f8b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   7416f5fa87988       busybox-fc5497c4f-xzlcl
	ac8c5533285ef       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Running             coredns                   2                   63cc7e479365f       coredns-7db6d8ff4d-b5wh5
	edb574400709c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   ed02bcecb90b1       storage-provisioner
	c54494c8905f1       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      About a minute ago   Running             kube-proxy                1                   5dfb8e27131a3       kube-proxy-r4c4j
	68e87aba726b4       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      About a minute ago   Running             kube-scheduler            1                   f2bd75299a3f6       kube-scheduler-multinode-151054
	856ef3eb93f24       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      About a minute ago   Running             kindnet-cni               1                   26c5ca4c1bc29       kindnet-w47zp
	dff53b546c4c3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      About a minute ago   Running             kube-controller-manager   1                   e2b29044c7d93       kube-controller-manager-multinode-151054
	be41b62accf28       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      About a minute ago   Running             etcd                      1                   154dc60e024b5       etcd-multinode-151054
	77c4392de50bc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      About a minute ago   Running             kube-apiserver            1                   20b521e6cfba5       kube-apiserver-multinode-151054
	c8c79ce8f8c6f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      About a minute ago   Exited              coredns                   1                   63cc7e479365f       coredns-7db6d8ff4d-b5wh5
	3a276adeed80c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   7 minutes ago        Exited              busybox                   0                   fc8b196b6ecd4       busybox-fc5497c4f-xzlcl
	13a24620fc650       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago        Exited              storage-provisioner       0                   ba883697d2863       storage-provisioner
	ff4b9a92f1149       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    8 minutes ago        Exited              kindnet-cni               0                   05b772f39774e       kindnet-w47zp
	8cc1098813fc6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      8 minutes ago        Exited              kube-proxy                0                   27e1b6698aa58       kube-proxy-r4c4j
	bb8e0a4b6f646       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      8 minutes ago        Exited              kube-controller-manager   0                   27bd748552179       kube-controller-manager-multinode-151054
	888230c2bc7db       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      8 minutes ago        Exited              kube-scheduler            0                   fa2f571bbc1b9       kube-scheduler-multinode-151054
	4f6aa9c58ffc6       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      8 minutes ago        Exited              kube-apiserver            0                   a7afdd5c40aa8       kube-apiserver-multinode-151054
	1e7183d60699a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      8 minutes ago        Exited              etcd                      0                   c4121b9f76afd       etcd-multinode-151054
	
	
	==> coredns [ac8c5533285ef716732054213308e08584216b6bae4256a2378f6be6f8d9f087] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44919 - 19462 "HINFO IN 8958517495534879098.2881146296358567092. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010488425s
	
	
	==> coredns [c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51882 - 16352 "HINFO IN 7252056825391412349.2958399299106700554. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015489499s
	
	
	==> describe nodes <==
	Name:               multinode-151054
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-151054
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=multinode-151054
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T20_37_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:37:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-151054
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:46:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:44:49 +0000   Mon, 29 Jul 2024 20:37:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:44:49 +0000   Mon, 29 Jul 2024 20:37:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:44:49 +0000   Mon, 29 Jul 2024 20:37:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:44:49 +0000   Mon, 29 Jul 2024 20:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    multinode-151054
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb998e3d9ebe4fefad43875bf7e965fa
	  System UUID:                fb998e3d-9ebe-4fef-ad43-875bf7e965fa
	  Boot ID:                    cb3d3153-48cd-4261-844f-da4501702e2e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xzlcl                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m7s
	  kube-system                 coredns-7db6d8ff4d-b5wh5                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m16s
	  kube-system                 etcd-multinode-151054                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m30s
	  kube-system                 kindnet-w47zp                                100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m16s
	  kube-system                 kube-apiserver-multinode-151054              250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-controller-manager-multinode-151054     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-proxy-r4c4j                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-scheduler-multinode-151054              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m14s  kube-proxy       
	  Normal  Starting                 99s    kube-proxy       
	  Normal  NodeAllocatableEnforced  8m30s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m30s  kubelet          Node multinode-151054 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s  kubelet          Node multinode-151054 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s  kubelet          Node multinode-151054 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m30s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m17s  node-controller  Node multinode-151054 event: Registered Node multinode-151054 in Controller
	  Normal  NodeReady                8m1s   kubelet          Node multinode-151054 status is now: NodeReady
	  Normal  Starting                 97s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s    kubelet          Node multinode-151054 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s    kubelet          Node multinode-151054 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s    kubelet          Node multinode-151054 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  97s    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           86s    node-controller  Node multinode-151054 event: Registered Node multinode-151054 in Controller
	
	
	Name:               multinode-151054-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-151054-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=multinode-151054
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T20_45_28_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:45:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-151054-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:46:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:45:58 +0000   Mon, 29 Jul 2024 20:45:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:45:58 +0000   Mon, 29 Jul 2024 20:45:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:45:58 +0000   Mon, 29 Jul 2024 20:45:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:45:58 +0000   Mon, 29 Jul 2024 20:45:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    multinode-151054-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 820ce78469ec4c72ae42f934153557b2
	  System UUID:                820ce784-69ec-4c72-ae42-f934153557b2
	  Boot ID:                    24a4d6f5-0294-4736-81c0-86585300cbca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4hd28    0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kindnet-n8znv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m31s
	  kube-system                 kube-proxy-k7bnr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m25s                  kube-proxy  
	  Normal  Starting                 56s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  7m31s (x2 over 7m31s)  kubelet     Node multinode-151054-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m31s (x2 over 7m31s)  kubelet     Node multinode-151054-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m31s (x2 over 7m31s)  kubelet     Node multinode-151054-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m10s                  kubelet     Node multinode-151054-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  60s (x2 over 60s)      kubelet     Node multinode-151054-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x2 over 60s)      kubelet     Node multinode-151054-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x2 over 60s)      kubelet     Node multinode-151054-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  60s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                42s                    kubelet     Node multinode-151054-m02 status is now: NodeReady
	
	
	Name:               multinode-151054-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-151054-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=multinode-151054
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T20_46_04_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:46:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-151054-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:46:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:46:23 +0000   Mon, 29 Jul 2024 20:46:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:46:23 +0000   Mon, 29 Jul 2024 20:46:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:46:23 +0000   Mon, 29 Jul 2024 20:46:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:46:23 +0000   Mon, 29 Jul 2024 20:46:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.252
	  Hostname:    multinode-151054-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 66c35766e1a148d58a4bda206755ecdd
	  System UUID:                66c35766-e1a1-48d5-8a4b-da206755ecdd
	  Boot ID:                    74d4ccc6-8a1e-4959-abf0-40cc8ad96ce6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dj5sl       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m37s
	  kube-system                 kube-proxy-bhsjj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m32s                  kube-proxy  
	  Normal  Starting                 17s                    kube-proxy  
	  Normal  Starting                 5m43s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  6m37s (x2 over 6m37s)  kubelet     Node multinode-151054-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m37s (x2 over 6m37s)  kubelet     Node multinode-151054-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m37s (x2 over 6m37s)  kubelet     Node multinode-151054-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m37s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m18s                  kubelet     Node multinode-151054-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m49s (x2 over 5m49s)  kubelet     Node multinode-151054-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m49s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    5m49s (x2 over 5m49s)  kubelet     Node multinode-151054-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m49s (x2 over 5m49s)  kubelet     Node multinode-151054-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m29s                  kubelet     Node multinode-151054-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  23s (x2 over 23s)      kubelet     Node multinode-151054-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x2 over 23s)      kubelet     Node multinode-151054-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x2 over 23s)      kubelet     Node multinode-151054-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                4s                     kubelet     Node multinode-151054-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.045090] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.157263] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.133278] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.263049] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.970738] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +3.557363] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.068952] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.999813] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +0.084354] kauditd_printk_skb: 69 callbacks suppressed
	[Jul29 20:38] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.096431] systemd-fstab-generator[1455]: Ignoring "noauto" option for root device
	[  +5.136064] kauditd_printk_skb: 51 callbacks suppressed
	[Jul29 20:39] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 20:44] systemd-fstab-generator[2775]: Ignoring "noauto" option for root device
	[  +0.136240] systemd-fstab-generator[2787]: Ignoring "noauto" option for root device
	[  +0.162770] systemd-fstab-generator[2801]: Ignoring "noauto" option for root device
	[  +0.138356] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +0.257825] systemd-fstab-generator[2841]: Ignoring "noauto" option for root device
	[  +1.893675] systemd-fstab-generator[2940]: Ignoring "noauto" option for root device
	[  +6.493850] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.139916] systemd-fstab-generator[3791]: Ignoring "noauto" option for root device
	[  +0.089232] kauditd_printk_skb: 62 callbacks suppressed
	[ +11.512925] kauditd_printk_skb: 19 callbacks suppressed
	[Jul29 20:45] systemd-fstab-generator[3965]: Ignoring "noauto" option for root device
	[ +14.638094] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1e7183d60699a2647987166cb5cd762b512c2aa6a62ef32a3fb000f0df9b9a77] <==
	{"level":"info","ts":"2024-07-29T20:37:51.841475Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T20:37:51.84341Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T20:37:51.844056Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.229:2379"}
	{"level":"warn","ts":"2024-07-29T20:38:56.574502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.709089ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7886243852606418830 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-151054-m02.17e6c991faa1ff30\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-151054-m02.17e6c991faa1ff30\" value_size:640 lease:7886243852606418230 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T20:38:56.574673Z","caller":"traceutil/trace.go:171","msg":"trace[1728078085] linearizableReadLoop","detail":"{readStateIndex:470; appliedIndex:468; }","duration":"127.091709ms","start":"2024-07-29T20:38:56.447558Z","end":"2024-07-29T20:38:56.57465Z","steps":["trace[1728078085] 'read index received'  (duration: 125.382304ms)","trace[1728078085] 'applied index is now lower than readState.Index'  (duration: 1.70866ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T20:38:56.574729Z","caller":"traceutil/trace.go:171","msg":"trace[1297579120] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"157.708956ms","start":"2024-07-29T20:38:56.417015Z","end":"2024-07-29T20:38:56.574724Z","steps":["trace[1297579120] 'process raft request'  (duration: 157.593007ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T20:38:56.574777Z","caller":"traceutil/trace.go:171","msg":"trace[1842555160] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"224.621858ms","start":"2024-07-29T20:38:56.350141Z","end":"2024-07-29T20:38:56.574763Z","steps":["trace[1842555160] 'process raft request'  (duration: 24.220271ms)","trace[1842555160] 'compare'  (duration: 199.628265ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T20:38:56.574894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.338735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-151054-m02\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-07-29T20:38:56.574917Z","caller":"traceutil/trace.go:171","msg":"trace[1583237489] range","detail":"{range_begin:/registry/minions/multinode-151054-m02; range_end:; response_count:1; response_revision:449; }","duration":"127.389323ms","start":"2024-07-29T20:38:56.447518Z","end":"2024-07-29T20:38:56.574907Z","steps":["trace[1583237489] 'agreement among raft nodes before linearized reading'  (duration: 127.325583ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T20:39:50.300985Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.229421ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7886243852606419280 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-151054-m03.17e6c99e7be7f2e1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-151054-m03.17e6c99e7be7f2e1\" value_size:642 lease:7886243852606418839 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T20:39:50.301288Z","caller":"traceutil/trace.go:171","msg":"trace[1155449345] linearizableReadLoop","detail":"{readStateIndex:622; appliedIndex:620; }","duration":"173.334101ms","start":"2024-07-29T20:39:50.12793Z","end":"2024-07-29T20:39:50.301264Z","steps":["trace[1155449345] 'read index received'  (duration: 21.729406ms)","trace[1155449345] 'applied index is now lower than readState.Index'  (duration: 151.60393ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T20:39:50.301447Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.507247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-151054-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-29T20:39:50.301494Z","caller":"traceutil/trace.go:171","msg":"trace[213535604] range","detail":"{range_begin:/registry/minions/multinode-151054-m03; range_end:; response_count:1; response_revision:585; }","duration":"173.587629ms","start":"2024-07-29T20:39:50.1279Z","end":"2024-07-29T20:39:50.301488Z","steps":["trace[213535604] 'agreement among raft nodes before linearized reading'  (duration: 173.43879ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T20:39:50.301532Z","caller":"traceutil/trace.go:171","msg":"trace[176953486] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"180.716704ms","start":"2024-07-29T20:39:50.120809Z","end":"2024-07-29T20:39:50.301525Z","steps":["trace[176953486] 'process raft request'  (duration: 180.407782ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T20:39:50.301483Z","caller":"traceutil/trace.go:171","msg":"trace[636608988] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"239.990741ms","start":"2024-07-29T20:39:50.061473Z","end":"2024-07-29T20:39:50.301464Z","steps":["trace[636608988] 'process raft request'  (duration: 88.155777ms)","trace[636608988] 'compare'  (duration: 151.115126ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T20:43:03.674961Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T20:43:03.675069Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-151054","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.229:2380"],"advertise-client-urls":["https://192.168.39.229:2379"]}
	{"level":"warn","ts":"2024-07-29T20:43:03.675182Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T20:43:03.675276Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T20:43:03.709753Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.229:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T20:43:03.709845Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.229:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T20:43:03.709938Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b8647f2870156d71","current-leader-member-id":"b8647f2870156d71"}
	{"level":"info","ts":"2024-07-29T20:43:03.713281Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2024-07-29T20:43:03.71348Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2024-07-29T20:43:03.713526Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-151054","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.229:2380"],"advertise-client-urls":["https://192.168.39.229:2379"]}
	
	
	==> etcd [be41b62accf2841e0bf2a352b8c68f862e471bcd07645f4572d107d85ea1b983] <==
	{"level":"info","ts":"2024-07-29T20:44:44.878291Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T20:44:44.8783Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T20:44:44.878808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 switched to configuration voters=(13286884612305677681)"}
	{"level":"info","ts":"2024-07-29T20:44:44.878915Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2bfbf13ce68722b","local-member-id":"b8647f2870156d71","added-peer-id":"b8647f2870156d71","added-peer-peer-urls":["https://192.168.39.229:2380"]}
	{"level":"info","ts":"2024-07-29T20:44:44.890141Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2bfbf13ce68722b","local-member-id":"b8647f2870156d71","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T20:44:44.89341Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T20:44:44.924571Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T20:44:44.931645Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b8647f2870156d71","initial-advertise-peer-urls":["https://192.168.39.229:2380"],"listen-peer-urls":["https://192.168.39.229:2380"],"advertise-client-urls":["https://192.168.39.229:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.229:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T20:44:44.934461Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T20:44:44.928439Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2024-07-29T20:44:44.938465Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2024-07-29T20:44:45.976456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T20:44:45.976575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T20:44:45.976631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 received MsgPreVoteResp from b8647f2870156d71 at term 2"}
	{"level":"info","ts":"2024-07-29T20:44:45.976662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T20:44:45.976686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 received MsgVoteResp from b8647f2870156d71 at term 3"}
	{"level":"info","ts":"2024-07-29T20:44:45.976712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T20:44:45.976741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8647f2870156d71 elected leader b8647f2870156d71 at term 3"}
	{"level":"info","ts":"2024-07-29T20:44:45.979452Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b8647f2870156d71","local-member-attributes":"{Name:multinode-151054 ClientURLs:[https://192.168.39.229:2379]}","request-path":"/0/members/b8647f2870156d71/attributes","cluster-id":"2bfbf13ce68722b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T20:44:45.979535Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T20:44:45.979878Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T20:44:45.979915Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T20:44:45.980079Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T20:44:45.982246Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.229:2379"}
	{"level":"info","ts":"2024-07-29T20:44:45.9832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:46:27 up 9 min,  0 users,  load average: 0.56, 0.41, 0.19
	Linux multinode-151054 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [856ef3eb93f24fd07576ab7206d2085805662bcebb64556667b2b01e500ddb72] <==
	I0729 20:45:45.109543       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.3.0/24] 
	I0729 20:45:55.112535       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:45:55.112581       1 main.go:299] handling current node
	I0729 20:45:55.112597       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:45:55.112631       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:45:55.112779       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0729 20:45:55.112800       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.3.0/24] 
	I0729 20:46:05.108535       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:46:05.108581       1 main.go:299] handling current node
	I0729 20:46:05.108599       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:46:05.108605       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:46:05.108770       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0729 20:46:05.108799       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.2.0/24] 
	I0729 20:46:15.108440       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:46:15.108489       1 main.go:299] handling current node
	I0729 20:46:15.108505       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:46:15.108510       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:46:15.108656       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0729 20:46:15.108680       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.2.0/24] 
	I0729 20:46:25.109033       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:46:25.109187       1 main.go:299] handling current node
	I0729 20:46:25.109237       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:46:25.109257       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:46:25.109477       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0729 20:46:25.109514       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [ff4b9a92f1149a48f7b46e24b20bbcf29fa26de244fb0e40227cb81df381afb0] <==
	I0729 20:42:15.367541       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.3.0/24] 
	I0729 20:42:25.367125       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:42:25.367267       1 main.go:299] handling current node
	I0729 20:42:25.367403       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:42:25.367442       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:42:25.367595       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0729 20:42:25.367621       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.3.0/24] 
	I0729 20:42:35.375290       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:42:35.375344       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:42:35.375607       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0729 20:42:35.375654       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.3.0/24] 
	I0729 20:42:35.375758       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:42:35.375785       1 main.go:299] handling current node
	I0729 20:42:45.367449       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:42:45.367489       1 main.go:299] handling current node
	I0729 20:42:45.367508       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:42:45.367514       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:42:45.367664       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0729 20:42:45.367682       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.3.0/24] 
	I0729 20:42:55.375096       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:42:55.375301       1 main.go:299] handling current node
	I0729 20:42:55.375355       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:42:55.375468       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:42:55.375670       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0729 20:42:55.375707       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4f6aa9c58ffc6a2c3ab0e140d828a572520317bde487b2bd3786507089e9c7a1] <==
	W0729 20:43:03.705041       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705088       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705136       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705190       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705242       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705290       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705338       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705477       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705560       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705592       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705625       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706289       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706355       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706484       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706541       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706584       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706635       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706683       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706731       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706780       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706871       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706950       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.707093       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.707143       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.707181       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [77c4392de50bc07017a521e440ba7d279a532b6fd1f4cc13180077b462921dff] <==
	I0729 20:44:47.239724       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 20:44:47.239763       1 policy_source.go:224] refreshing policies
	I0729 20:44:47.257518       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 20:44:47.257630       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 20:44:47.258701       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 20:44:47.260027       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 20:44:47.260081       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 20:44:47.260546       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 20:44:47.258706       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 20:44:47.266945       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 20:44:47.268250       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 20:44:47.268440       1 aggregator.go:165] initial CRD sync complete...
	I0729 20:44:47.268549       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 20:44:47.268640       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 20:44:47.268665       1 cache.go:39] Caches are synced for autoregister controller
	E0729 20:44:47.277912       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 20:44:47.323017       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 20:44:48.161137       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 20:44:49.851235       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 20:44:49.967118       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 20:44:49.979045       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 20:44:50.045469       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 20:44:50.052989       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 20:45:00.537769       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 20:45:00.637072       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bb8e0a4b6f646dee0d17ae90ebaae5980a9278c344a3cbf71ef029fbab9a09e8] <==
	I0729 20:38:56.614343       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151054-m02" podCIDRs=["10.244.1.0/24"]
	I0729 20:38:59.125669       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-151054-m02"
	I0729 20:39:17.015873       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:39:19.164307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.765076ms"
	I0729 20:39:19.181803       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.427586ms"
	I0729 20:39:19.181970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.376µs"
	I0729 20:39:19.182093       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.526µs"
	I0729 20:39:19.184273       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.285µs"
	I0729 20:39:22.764815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.933623ms"
	I0729 20:39:22.765278       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.418µs"
	I0729 20:39:22.807838       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.888226ms"
	I0729 20:39:22.808045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.79µs"
	I0729 20:39:50.305267       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151054-m03\" does not exist"
	I0729 20:39:50.307465       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:39:50.340955       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151054-m03" podCIDRs=["10.244.2.0/24"]
	I0729 20:39:54.156804       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-151054-m03"
	I0729 20:40:09.735448       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:40:37.854567       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:40:38.917537       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:40:38.918519       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151054-m03\" does not exist"
	I0729 20:40:38.936837       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151054-m03" podCIDRs=["10.244.3.0/24"]
	I0729 20:40:58.217720       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:41:39.211790       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m03"
	I0729 20:41:39.234455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.318587ms"
	I0729 20:41:39.234584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.688µs"
	
	
	==> kube-controller-manager [dff53b546c4c3702886138d0b4711c165dbba5bdd1118c3abb7e2603b8ddac15] <==
	I0729 20:45:00.981745       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 20:45:01.033459       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 20:45:01.033559       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 20:45:22.991333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.453429ms"
	I0729 20:45:22.991496       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.395µs"
	I0729 20:45:23.000145       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.791865ms"
	I0729 20:45:23.000224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.344µs"
	I0729 20:45:26.161669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.008µs"
	I0729 20:45:27.294559       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151054-m02\" does not exist"
	I0729 20:45:27.307748       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151054-m02" podCIDRs=["10.244.1.0/24"]
	I0729 20:45:28.240793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.991µs"
	I0729 20:45:28.252613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.56µs"
	I0729 20:45:28.255827       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.453µs"
	I0729 20:45:28.263452       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.139µs"
	I0729 20:45:28.266938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.196µs"
	I0729 20:45:45.641132       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:45:45.662715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.06µs"
	I0729 20:45:45.676069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.886µs"
	I0729 20:45:49.499871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.698808ms"
	I0729 20:45:49.500334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.357µs"
	I0729 20:46:03.521364       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:46:04.531306       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:46:04.531598       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151054-m03\" does not exist"
	I0729 20:46:04.544939       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151054-m03" podCIDRs=["10.244.2.0/24"]
	I0729 20:46:23.820430       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	
	
	==> kube-proxy [8cc1098813fc68af4447889f8e1a0ab2502ac50e35de420a3715744f54a9a2d0] <==
	I0729 20:38:12.205950       1 server_linux.go:69] "Using iptables proxy"
	I0729 20:38:12.217057       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.229"]
	I0729 20:38:12.248523       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 20:38:12.248573       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 20:38:12.248589       1 server_linux.go:165] "Using iptables Proxier"
	I0729 20:38:12.250875       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 20:38:12.251085       1 server.go:872] "Version info" version="v1.30.3"
	I0729 20:38:12.251109       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:38:12.252220       1 config.go:192] "Starting service config controller"
	I0729 20:38:12.252251       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 20:38:12.252316       1 config.go:101] "Starting endpoint slice config controller"
	I0729 20:38:12.252321       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 20:38:12.252904       1 config.go:319] "Starting node config controller"
	I0729 20:38:12.252926       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 20:38:12.353082       1 shared_informer.go:320] Caches are synced for node config
	I0729 20:38:12.353126       1 shared_informer.go:320] Caches are synced for service config
	I0729 20:38:12.353165       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [c54494c8905f16425e071d55bd738b79d3173f93e384b59b7e84fede2096c255] <==
	I0729 20:44:45.656551       1 server_linux.go:69] "Using iptables proxy"
	I0729 20:44:47.275205       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.229"]
	I0729 20:44:47.346503       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 20:44:47.346605       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 20:44:47.346635       1 server_linux.go:165] "Using iptables Proxier"
	I0729 20:44:47.348885       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 20:44:47.349087       1 server.go:872] "Version info" version="v1.30.3"
	I0729 20:44:47.349117       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:44:47.350715       1 config.go:192] "Starting service config controller"
	I0729 20:44:47.350772       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 20:44:47.350816       1 config.go:101] "Starting endpoint slice config controller"
	I0729 20:44:47.350832       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 20:44:47.352263       1 config.go:319] "Starting node config controller"
	I0729 20:44:47.353024       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 20:44:47.451053       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 20:44:47.451114       1 shared_informer.go:320] Caches are synced for service config
	I0729 20:44:47.453279       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [68e87aba726b426aa6ce249d71d77623d5a91dd30ca5292310cb0e5220f80c5b] <==
	I0729 20:44:45.431738       1 serving.go:380] Generated self-signed cert in-memory
	W0729 20:44:47.255205       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 20:44:47.255280       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 20:44:47.255290       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 20:44:47.255296       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 20:44:47.273187       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 20:44:47.273267       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:44:47.277109       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 20:44:47.277139       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 20:44:47.280602       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 20:44:47.280669       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 20:44:47.378630       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [888230c2bc7db0c13a92b39530084837b21ca72108fe4af8328b412e66c2b104] <==
	E0729 20:37:53.926225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 20:37:53.925809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 20:37:53.926277       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 20:37:53.925848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 20:37:53.926301       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 20:37:53.926085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 20:37:53.926313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 20:37:53.925131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 20:37:53.926354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 20:37:53.926567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 20:37:53.926599       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 20:37:54.794255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 20:37:54.794297       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 20:37:54.900722       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 20:37:54.900963       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 20:37:55.061253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 20:37:55.061294       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 20:37:55.131159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 20:37:55.131369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 20:37:55.154237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 20:37:55.154640       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 20:37:55.162041       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 20:37:55.162113       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0729 20:37:56.718893       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 20:43:03.679293       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 20:44:49 multinode-151054 kubelet[3798]: I0729 20:44:49.552003    3798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa9ada34162e7d8ab0371909d6b8ded7-k8s-certs\") pod \"kube-controller-manager-multinode-151054\" (UID: \"fa9ada34162e7d8ab0371909d6b8ded7\") " pod="kube-system/kube-controller-manager-multinode-151054"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.292843    3798 apiserver.go:52] "Watching apiserver"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.295595    3798 topology_manager.go:215] "Topology Admit Handler" podUID="96100a20-c36f-43ca-bfd9-973f4081239d" podNamespace="kube-system" podName="kube-proxy-r4c4j"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.295760    3798 topology_manager.go:215] "Topology Admit Handler" podUID="0b52b4dd-9625-4ec7-8baf-c41eb5e7c601" podNamespace="kube-system" podName="storage-provisioner"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.295814    3798 topology_manager.go:215] "Topology Admit Handler" podUID="b703a9ed-bb2b-4659-a7b3-90b0a410816c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-b5wh5"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.295858    3798 topology_manager.go:215] "Topology Admit Handler" podUID="3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8" podNamespace="kube-system" podName="kindnet-w47zp"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.295916    3798 topology_manager.go:215] "Topology Admit Handler" podUID="a183ecda-22ea-4803-8cf4-44a508504fcd" podNamespace="default" podName="busybox-fc5497c4f-xzlcl"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.339280    3798 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.358429    3798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8-xtables-lock\") pod \"kindnet-w47zp\" (UID: \"3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8\") " pod="kube-system/kindnet-w47zp"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.358658    3798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/96100a20-c36f-43ca-bfd9-973f4081239d-xtables-lock\") pod \"kube-proxy-r4c4j\" (UID: \"96100a20-c36f-43ca-bfd9-973f4081239d\") " pod="kube-system/kube-proxy-r4c4j"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.358726    3798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0b52b4dd-9625-4ec7-8baf-c41eb5e7c601-tmp\") pod \"storage-provisioner\" (UID: \"0b52b4dd-9625-4ec7-8baf-c41eb5e7c601\") " pod="kube-system/storage-provisioner"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.358803    3798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/96100a20-c36f-43ca-bfd9-973f4081239d-lib-modules\") pod \"kube-proxy-r4c4j\" (UID: \"96100a20-c36f-43ca-bfd9-973f4081239d\") " pod="kube-system/kube-proxy-r4c4j"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.358866    3798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8-cni-cfg\") pod \"kindnet-w47zp\" (UID: \"3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8\") " pod="kube-system/kindnet-w47zp"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.358918    3798 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8-lib-modules\") pod \"kindnet-w47zp\" (UID: \"3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8\") " pod="kube-system/kindnet-w47zp"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: E0729 20:44:50.551106    3798 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-scheduler-multinode-151054\" already exists" pod="kube-system/kube-scheduler-multinode-151054"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: E0729 20:44:50.554727    3798 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-151054\" already exists" pod="kube-system/kube-apiserver-multinode-151054"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: E0729 20:44:50.555316    3798 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-151054\" already exists" pod="kube-system/kube-controller-manager-multinode-151054"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: E0729 20:44:50.556160    3798 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"etcd-multinode-151054\" already exists" pod="kube-system/etcd-multinode-151054"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.597247    3798 scope.go:117] "RemoveContainer" containerID="c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a"
	Jul 29 20:44:53 multinode-151054 kubelet[3798]: I0729 20:44:53.333122    3798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 29 20:45:49 multinode-151054 kubelet[3798]: E0729 20:45:49.458012    3798 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:45:49 multinode-151054 kubelet[3798]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:45:49 multinode-151054 kubelet[3798]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:45:49 multinode-151054 kubelet[3798]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:45:49 multinode-151054 kubelet[3798]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 20:46:26.291374  775317 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19344-733808/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-151054 -n multinode-151054
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-151054 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (327.25s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 stop
E0729 20:48:14.090639  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-151054 stop: exit status 82 (2m0.470900511s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-151054-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-151054 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-151054 status: exit status 3 (18.87823783s)

                                                
                                                
-- stdout --
	multinode-151054
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-151054-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 20:48:49.508463  775982 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.98:22: connect: no route to host
	E0729 20:48:49.508504  775982 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.98:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-151054 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-151054 -n multinode-151054
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-151054 logs -n 25: (1.378675395s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-151054 cp multinode-151054-m02:/home/docker/cp-test.txt                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054:/home/docker/cp-test_multinode-151054-m02_multinode-151054.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n multinode-151054 sudo cat                                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | /home/docker/cp-test_multinode-151054-m02_multinode-151054.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-151054 cp multinode-151054-m02:/home/docker/cp-test.txt                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m03:/home/docker/cp-test_multinode-151054-m02_multinode-151054-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n multinode-151054-m03 sudo cat                                   | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | /home/docker/cp-test_multinode-151054-m02_multinode-151054-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-151054 cp testdata/cp-test.txt                                                | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-151054 cp multinode-151054-m03:/home/docker/cp-test.txt                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2361961589/001/cp-test_multinode-151054-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-151054 cp multinode-151054-m03:/home/docker/cp-test.txt                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054:/home/docker/cp-test_multinode-151054-m03_multinode-151054.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n multinode-151054 sudo cat                                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | /home/docker/cp-test_multinode-151054-m03_multinode-151054.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-151054 cp multinode-151054-m03:/home/docker/cp-test.txt                       | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m02:/home/docker/cp-test_multinode-151054-m03_multinode-151054-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n multinode-151054-m02 sudo cat                                   | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | /home/docker/cp-test_multinode-151054-m03_multinode-151054-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-151054 node stop m03                                                          | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	| node    | multinode-151054 node start                                                             | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:41 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-151054                                                                | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:41 UTC |                     |
	| stop    | -p multinode-151054                                                                     | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:41 UTC |                     |
	| start   | -p multinode-151054                                                                     | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:43 UTC | 29 Jul 24 20:46 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-151054                                                                | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:46 UTC |                     |
	| node    | multinode-151054 node delete                                                            | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:46 UTC | 29 Jul 24 20:46 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-151054 stop                                                                   | multinode-151054 | jenkins | v1.33.1 | 29 Jul 24 20:46 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 20:43:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 20:43:02.610225  774167 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:43:02.610356  774167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:43:02.610364  774167 out.go:304] Setting ErrFile to fd 2...
	I0729 20:43:02.610369  774167 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:43:02.610564  774167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:43:02.611117  774167 out.go:298] Setting JSON to false
	I0729 20:43:02.612200  774167 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":15930,"bootTime":1722269853,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 20:43:02.612261  774167 start.go:139] virtualization: kvm guest
	I0729 20:43:02.619537  774167 out.go:177] * [multinode-151054] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 20:43:02.623637  774167 notify.go:220] Checking for updates...
	I0729 20:43:02.623658  774167 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 20:43:02.625559  774167 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 20:43:02.627166  774167 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:43:02.628706  774167 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:43:02.630067  774167 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 20:43:02.631355  774167 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 20:43:02.633182  774167 config.go:182] Loaded profile config "multinode-151054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:43:02.633305  774167 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 20:43:02.633921  774167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:43:02.633971  774167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:43:02.650780  774167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37449
	I0729 20:43:02.651289  774167 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:43:02.651882  774167 main.go:141] libmachine: Using API Version  1
	I0729 20:43:02.651905  774167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:43:02.652266  774167 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:43:02.652601  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:43:02.687511  774167 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 20:43:02.688747  774167 start.go:297] selected driver: kvm2
	I0729 20:43:02.688761  774167 start.go:901] validating driver "kvm2" against &{Name:multinode-151054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-151054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:43:02.688900  774167 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 20:43:02.689212  774167 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:43:02.689283  774167 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 20:43:02.704645  774167 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 20:43:02.705387  774167 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 20:43:02.705447  774167 cni.go:84] Creating CNI manager for ""
	I0729 20:43:02.705459  774167 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 20:43:02.705536  774167 start.go:340] cluster config:
	{Name:multinode-151054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-151054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:43:02.705680  774167 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:43:02.707371  774167 out.go:177] * Starting "multinode-151054" primary control-plane node in "multinode-151054" cluster
	I0729 20:43:02.708457  774167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:43:02.708504  774167 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 20:43:02.708520  774167 cache.go:56] Caching tarball of preloaded images
	I0729 20:43:02.708638  774167 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 20:43:02.708649  774167 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 20:43:02.708764  774167 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/config.json ...
	I0729 20:43:02.708951  774167 start.go:360] acquireMachinesLock for multinode-151054: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:43:02.708993  774167 start.go:364] duration metric: took 24.488µs to acquireMachinesLock for "multinode-151054"
	I0729 20:43:02.709007  774167 start.go:96] Skipping create...Using existing machine configuration
	I0729 20:43:02.709018  774167 fix.go:54] fixHost starting: 
	I0729 20:43:02.709280  774167 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:43:02.709314  774167 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:43:02.723593  774167 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34883
	I0729 20:43:02.724077  774167 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:43:02.724538  774167 main.go:141] libmachine: Using API Version  1
	I0729 20:43:02.724561  774167 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:43:02.724916  774167 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:43:02.725074  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:43:02.725205  774167 main.go:141] libmachine: (multinode-151054) Calling .GetState
	I0729 20:43:02.726862  774167 fix.go:112] recreateIfNeeded on multinode-151054: state=Running err=<nil>
	W0729 20:43:02.726885  774167 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 20:43:02.729475  774167 out.go:177] * Updating the running kvm2 "multinode-151054" VM ...
	I0729 20:43:02.730898  774167 machine.go:94] provisionDockerMachine start ...
	I0729 20:43:02.730925  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:43:02.731154  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:43:02.733874  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:02.734442  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:43:02.734474  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:02.734643  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:43:02.734836  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:02.735035  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:02.735203  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:43:02.735399  774167 main.go:141] libmachine: Using SSH client type: native
	I0729 20:43:02.735610  774167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0729 20:43:02.735632  774167 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 20:43:02.840723  774167 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-151054
	
	I0729 20:43:02.840753  774167 main.go:141] libmachine: (multinode-151054) Calling .GetMachineName
	I0729 20:43:02.841014  774167 buildroot.go:166] provisioning hostname "multinode-151054"
	I0729 20:43:02.841048  774167 main.go:141] libmachine: (multinode-151054) Calling .GetMachineName
	I0729 20:43:02.841262  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:43:02.844377  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:02.844853  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:43:02.844876  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:02.844996  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:43:02.845192  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:02.845352  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:02.845497  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:43:02.845713  774167 main.go:141] libmachine: Using SSH client type: native
	I0729 20:43:02.845930  774167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0729 20:43:02.845944  774167 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-151054 && echo "multinode-151054" | sudo tee /etc/hostname
	I0729 20:43:02.962902  774167 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-151054
	
	I0729 20:43:02.962949  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:43:02.965916  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:02.966262  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:43:02.966292  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:02.966439  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:43:02.966657  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:02.966832  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:02.966971  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:43:02.967202  774167 main.go:141] libmachine: Using SSH client type: native
	I0729 20:43:02.967394  774167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0729 20:43:02.967410  774167 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-151054' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-151054/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-151054' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 20:43:03.072562  774167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:43:03.072599  774167 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 20:43:03.072624  774167 buildroot.go:174] setting up certificates
	I0729 20:43:03.072636  774167 provision.go:84] configureAuth start
	I0729 20:43:03.072646  774167 main.go:141] libmachine: (multinode-151054) Calling .GetMachineName
	I0729 20:43:03.072990  774167 main.go:141] libmachine: (multinode-151054) Calling .GetIP
	I0729 20:43:03.075839  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.076290  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:43:03.076311  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.076453  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:43:03.078711  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.079005  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:43:03.079037  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.079153  774167 provision.go:143] copyHostCerts
	I0729 20:43:03.079178  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:43:03.079221  774167 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem, removing ...
	I0729 20:43:03.079230  774167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:43:03.079295  774167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 20:43:03.079387  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:43:03.079404  774167 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem, removing ...
	I0729 20:43:03.079410  774167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:43:03.079437  774167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 20:43:03.079512  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:43:03.079537  774167 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem, removing ...
	I0729 20:43:03.079551  774167 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:43:03.079592  774167 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 20:43:03.079664  774167 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.multinode-151054 san=[127.0.0.1 192.168.39.229 localhost minikube multinode-151054]
	I0729 20:43:03.381435  774167 provision.go:177] copyRemoteCerts
	I0729 20:43:03.381521  774167 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 20:43:03.381547  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:43:03.384305  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.384740  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:43:03.384769  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.384961  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:43:03.385190  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:03.385385  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:43:03.385547  774167 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/multinode-151054/id_rsa Username:docker}
	I0729 20:43:03.470639  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0729 20:43:03.470714  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 20:43:03.498536  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0729 20:43:03.498628  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0729 20:43:03.528214  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0729 20:43:03.528281  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 20:43:03.553922  774167 provision.go:87] duration metric: took 481.273908ms to configureAuth
	I0729 20:43:03.553955  774167 buildroot.go:189] setting minikube options for container-runtime
	I0729 20:43:03.554229  774167 config.go:182] Loaded profile config "multinode-151054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:43:03.554324  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:43:03.557296  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.557872  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:43:03.557902  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:43:03.558132  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:43:03.558353  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:03.558606  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:43:03.558777  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:43:03.558947  774167 main.go:141] libmachine: Using SSH client type: native
	I0729 20:43:03.559127  774167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0729 20:43:03.559141  774167 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 20:44:34.196887  774167 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 20:44:34.196929  774167 machine.go:97] duration metric: took 1m31.466012352s to provisionDockerMachine
	I0729 20:44:34.196953  774167 start.go:293] postStartSetup for "multinode-151054" (driver="kvm2")
	I0729 20:44:34.196970  774167 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 20:44:34.197004  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:44:34.197386  774167 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 20:44:34.197420  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:44:34.200885  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.201467  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:44:34.201499  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.201671  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:44:34.201863  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:44:34.202027  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:44:34.202147  774167 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/multinode-151054/id_rsa Username:docker}
	I0729 20:44:34.286838  774167 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 20:44:34.290584  774167 command_runner.go:130] > NAME=Buildroot
	I0729 20:44:34.290600  774167 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0729 20:44:34.290604  774167 command_runner.go:130] > ID=buildroot
	I0729 20:44:34.290608  774167 command_runner.go:130] > VERSION_ID=2023.02.9
	I0729 20:44:34.290613  774167 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0729 20:44:34.290765  774167 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 20:44:34.290795  774167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 20:44:34.290851  774167 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 20:44:34.290934  774167 filesync.go:149] local asset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> 7409622.pem in /etc/ssl/certs
	I0729 20:44:34.290947  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /etc/ssl/certs/7409622.pem
	I0729 20:44:34.291037  774167 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 20:44:34.299813  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:44:34.323012  774167 start.go:296] duration metric: took 126.042002ms for postStartSetup
	I0729 20:44:34.323057  774167 fix.go:56] duration metric: took 1m31.614040115s for fixHost
	I0729 20:44:34.323089  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:44:34.326334  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.326802  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:44:34.326835  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.326984  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:44:34.327163  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:44:34.327321  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:44:34.327482  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:44:34.327660  774167 main.go:141] libmachine: Using SSH client type: native
	I0729 20:44:34.327890  774167 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.229 22 <nil> <nil>}
	I0729 20:44:34.327908  774167 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 20:44:34.432801  774167 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722285874.405004197
	
	I0729 20:44:34.432837  774167 fix.go:216] guest clock: 1722285874.405004197
	I0729 20:44:34.432847  774167 fix.go:229] Guest: 2024-07-29 20:44:34.405004197 +0000 UTC Remote: 2024-07-29 20:44:34.323067196 +0000 UTC m=+91.749714022 (delta=81.937001ms)
	I0729 20:44:34.432894  774167 fix.go:200] guest clock delta is within tolerance: 81.937001ms
	I0729 20:44:34.432903  774167 start.go:83] releasing machines lock for "multinode-151054", held for 1m31.723900503s
	I0729 20:44:34.432928  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:44:34.433187  774167 main.go:141] libmachine: (multinode-151054) Calling .GetIP
	I0729 20:44:34.435972  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.436486  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:44:34.436524  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.436710  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:44:34.437295  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:44:34.437511  774167 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:44:34.437611  774167 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 20:44:34.437711  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:44:34.437737  774167 ssh_runner.go:195] Run: cat /version.json
	I0729 20:44:34.437757  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:44:34.440262  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.440527  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.440657  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:44:34.440684  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.440846  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:44:34.440991  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:44:34.440997  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:44:34.441016  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:34.441190  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:44:34.441198  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:44:34.441401  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:44:34.441399  774167 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/multinode-151054/id_rsa Username:docker}
	I0729 20:44:34.441551  774167 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:44:34.441672  774167 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/multinode-151054/id_rsa Username:docker}
	I0729 20:44:34.517028  774167 command_runner.go:130] > {"iso_version": "v1.33.1-1721690939-19319", "kicbase_version": "v0.0.44-1721687125-19319", "minikube_version": "v1.33.1", "commit": "92810d69359a527ae6920427bb5751eaaa3842e4"}
	I0729 20:44:34.517297  774167 ssh_runner.go:195] Run: systemctl --version
	I0729 20:44:34.556555  774167 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0729 20:44:34.557204  774167 command_runner.go:130] > systemd 252 (252)
	I0729 20:44:34.557253  774167 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0729 20:44:34.557328  774167 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 20:44:34.709833  774167 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 20:44:34.719820  774167 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0729 20:44:34.719905  774167 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 20:44:34.719971  774167 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 20:44:34.729126  774167 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 20:44:34.729150  774167 start.go:495] detecting cgroup driver to use...
	I0729 20:44:34.729215  774167 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 20:44:34.744636  774167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 20:44:34.758610  774167 docker.go:216] disabling cri-docker service (if available) ...
	I0729 20:44:34.758670  774167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 20:44:34.771781  774167 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 20:44:34.785262  774167 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 20:44:34.934887  774167 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 20:44:35.070496  774167 docker.go:232] disabling docker service ...
	I0729 20:44:35.070565  774167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 20:44:35.086493  774167 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 20:44:35.099060  774167 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 20:44:35.233546  774167 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 20:44:35.367361  774167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 20:44:35.380659  774167 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 20:44:35.397592  774167 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0729 20:44:35.398219  774167 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 20:44:35.398318  774167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:44:35.408682  774167 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 20:44:35.408753  774167 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:44:35.419315  774167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:44:35.429424  774167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:44:35.439428  774167 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 20:44:35.449458  774167 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:44:35.459285  774167 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:44:35.469564  774167 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:44:35.479874  774167 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 20:44:35.488711  774167 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0729 20:44:35.488893  774167 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 20:44:35.497577  774167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:44:35.633345  774167 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 20:44:37.067497  774167 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.434111954s)
	I0729 20:44:37.067526  774167 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 20:44:37.067588  774167 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 20:44:37.072154  774167 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0729 20:44:37.072178  774167 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0729 20:44:37.072184  774167 command_runner.go:130] > Device: 0,22	Inode: 1338        Links: 1
	I0729 20:44:37.072191  774167 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 20:44:37.072199  774167 command_runner.go:130] > Access: 2024-07-29 20:44:37.008042760 +0000
	I0729 20:44:37.072207  774167 command_runner.go:130] > Modify: 2024-07-29 20:44:36.931040525 +0000
	I0729 20:44:37.072216  774167 command_runner.go:130] > Change: 2024-07-29 20:44:36.931040525 +0000
	I0729 20:44:37.072239  774167 command_runner.go:130] >  Birth: -
	I0729 20:44:37.072261  774167 start.go:563] Will wait 60s for crictl version
	I0729 20:44:37.072319  774167 ssh_runner.go:195] Run: which crictl
	I0729 20:44:37.075966  774167 command_runner.go:130] > /usr/bin/crictl
	I0729 20:44:37.076075  774167 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 20:44:37.109888  774167 command_runner.go:130] > Version:  0.1.0
	I0729 20:44:37.109916  774167 command_runner.go:130] > RuntimeName:  cri-o
	I0729 20:44:37.109921  774167 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0729 20:44:37.109927  774167 command_runner.go:130] > RuntimeApiVersion:  v1
	I0729 20:44:37.110875  774167 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 20:44:37.110940  774167 ssh_runner.go:195] Run: crio --version
	I0729 20:44:37.138372  774167 command_runner.go:130] > crio version 1.29.1
	I0729 20:44:37.138398  774167 command_runner.go:130] > Version:        1.29.1
	I0729 20:44:37.138406  774167 command_runner.go:130] > GitCommit:      unknown
	I0729 20:44:37.138412  774167 command_runner.go:130] > GitCommitDate:  unknown
	I0729 20:44:37.138417  774167 command_runner.go:130] > GitTreeState:   clean
	I0729 20:44:37.138424  774167 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 20:44:37.138430  774167 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 20:44:37.138436  774167 command_runner.go:130] > Compiler:       gc
	I0729 20:44:37.138446  774167 command_runner.go:130] > Platform:       linux/amd64
	I0729 20:44:37.138452  774167 command_runner.go:130] > Linkmode:       dynamic
	I0729 20:44:37.138458  774167 command_runner.go:130] > BuildTags:      
	I0729 20:44:37.138465  774167 command_runner.go:130] >   containers_image_ostree_stub
	I0729 20:44:37.138469  774167 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 20:44:37.138475  774167 command_runner.go:130] >   btrfs_noversion
	I0729 20:44:37.138480  774167 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 20:44:37.138487  774167 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 20:44:37.138490  774167 command_runner.go:130] >   seccomp
	I0729 20:44:37.138494  774167 command_runner.go:130] > LDFlags:          unknown
	I0729 20:44:37.138500  774167 command_runner.go:130] > SeccompEnabled:   true
	I0729 20:44:37.138505  774167 command_runner.go:130] > AppArmorEnabled:  false
	I0729 20:44:37.138591  774167 ssh_runner.go:195] Run: crio --version
	I0729 20:44:37.164573  774167 command_runner.go:130] > crio version 1.29.1
	I0729 20:44:37.164596  774167 command_runner.go:130] > Version:        1.29.1
	I0729 20:44:37.164603  774167 command_runner.go:130] > GitCommit:      unknown
	I0729 20:44:37.164607  774167 command_runner.go:130] > GitCommitDate:  unknown
	I0729 20:44:37.164611  774167 command_runner.go:130] > GitTreeState:   clean
	I0729 20:44:37.164619  774167 command_runner.go:130] > BuildDate:      2024-07-23T05:10:02Z
	I0729 20:44:37.164626  774167 command_runner.go:130] > GoVersion:      go1.21.6
	I0729 20:44:37.164633  774167 command_runner.go:130] > Compiler:       gc
	I0729 20:44:37.164642  774167 command_runner.go:130] > Platform:       linux/amd64
	I0729 20:44:37.164648  774167 command_runner.go:130] > Linkmode:       dynamic
	I0729 20:44:37.164653  774167 command_runner.go:130] > BuildTags:      
	I0729 20:44:37.164658  774167 command_runner.go:130] >   containers_image_ostree_stub
	I0729 20:44:37.164671  774167 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0729 20:44:37.164678  774167 command_runner.go:130] >   btrfs_noversion
	I0729 20:44:37.164682  774167 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0729 20:44:37.164689  774167 command_runner.go:130] >   libdm_no_deferred_remove
	I0729 20:44:37.164693  774167 command_runner.go:130] >   seccomp
	I0729 20:44:37.164699  774167 command_runner.go:130] > LDFlags:          unknown
	I0729 20:44:37.164704  774167 command_runner.go:130] > SeccompEnabled:   true
	I0729 20:44:37.164714  774167 command_runner.go:130] > AppArmorEnabled:  false
	I0729 20:44:37.167817  774167 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 20:44:37.169152  774167 main.go:141] libmachine: (multinode-151054) Calling .GetIP
	I0729 20:44:37.171904  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:37.172282  774167 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:44:37.172307  774167 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:44:37.172486  774167 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 20:44:37.176504  774167 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0729 20:44:37.176692  774167 kubeadm.go:883] updating cluster {Name:multinode-151054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
30.3 ClusterName:multinode-151054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 20:44:37.176827  774167 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 20:44:37.176886  774167 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:44:37.226364  774167 command_runner.go:130] > {
	I0729 20:44:37.226391  774167 command_runner.go:130] >   "images": [
	I0729 20:44:37.226395  774167 command_runner.go:130] >     {
	I0729 20:44:37.226404  774167 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 20:44:37.226409  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.226416  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 20:44:37.226422  774167 command_runner.go:130] >       ],
	I0729 20:44:37.226426  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.226439  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 20:44:37.226451  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 20:44:37.226457  774167 command_runner.go:130] >       ],
	I0729 20:44:37.226468  774167 command_runner.go:130] >       "size": "87165492",
	I0729 20:44:37.226475  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.226482  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.226495  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.226499  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.226503  774167 command_runner.go:130] >     },
	I0729 20:44:37.226507  774167 command_runner.go:130] >     {
	I0729 20:44:37.226513  774167 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 20:44:37.226522  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.226532  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 20:44:37.226541  774167 command_runner.go:130] >       ],
	I0729 20:44:37.226548  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.226562  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 20:44:37.226581  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 20:44:37.226589  774167 command_runner.go:130] >       ],
	I0729 20:44:37.226593  774167 command_runner.go:130] >       "size": "87174707",
	I0729 20:44:37.226600  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.226614  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.226626  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.226635  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.226641  774167 command_runner.go:130] >     },
	I0729 20:44:37.226649  774167 command_runner.go:130] >     {
	I0729 20:44:37.226659  774167 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 20:44:37.226669  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.226678  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 20:44:37.226684  774167 command_runner.go:130] >       ],
	I0729 20:44:37.226691  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.226706  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 20:44:37.226720  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 20:44:37.226729  774167 command_runner.go:130] >       ],
	I0729 20:44:37.226736  774167 command_runner.go:130] >       "size": "1363676",
	I0729 20:44:37.226744  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.226754  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.226761  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.226769  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.226775  774167 command_runner.go:130] >     },
	I0729 20:44:37.226783  774167 command_runner.go:130] >     {
	I0729 20:44:37.226796  774167 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 20:44:37.226805  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.226815  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 20:44:37.226824  774167 command_runner.go:130] >       ],
	I0729 20:44:37.226833  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.226945  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 20:44:37.226989  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 20:44:37.227000  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227081  774167 command_runner.go:130] >       "size": "31470524",
	I0729 20:44:37.227105  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.227118  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.227127  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.227136  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.227145  774167 command_runner.go:130] >     },
	I0729 20:44:37.227153  774167 command_runner.go:130] >     {
	I0729 20:44:37.227164  774167 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 20:44:37.227173  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.227187  774167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 20:44:37.227197  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227204  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.227218  774167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 20:44:37.227233  774167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 20:44:37.227241  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227249  774167 command_runner.go:130] >       "size": "61245718",
	I0729 20:44:37.227254  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.227260  774167 command_runner.go:130] >       "username": "nonroot",
	I0729 20:44:37.227269  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.227279  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.227285  774167 command_runner.go:130] >     },
	I0729 20:44:37.227294  774167 command_runner.go:130] >     {
	I0729 20:44:37.227326  774167 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 20:44:37.227334  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.227342  774167 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 20:44:37.227349  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227363  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.227378  774167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 20:44:37.227408  774167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 20:44:37.227416  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227425  774167 command_runner.go:130] >       "size": "150779692",
	I0729 20:44:37.227430  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.227440  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.227449  774167 command_runner.go:130] >       },
	I0729 20:44:37.227458  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.227465  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.227474  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.227483  774167 command_runner.go:130] >     },
	I0729 20:44:37.227491  774167 command_runner.go:130] >     {
	I0729 20:44:37.227503  774167 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 20:44:37.227511  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.227518  774167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 20:44:37.227526  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227536  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.227552  774167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 20:44:37.227564  774167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 20:44:37.227569  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227575  774167 command_runner.go:130] >       "size": "117609954",
	I0729 20:44:37.227581  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.227587  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.227593  774167 command_runner.go:130] >       },
	I0729 20:44:37.227601  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.227607  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.227614  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.227623  774167 command_runner.go:130] >     },
	I0729 20:44:37.227630  774167 command_runner.go:130] >     {
	I0729 20:44:37.227643  774167 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 20:44:37.227650  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.227662  774167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 20:44:37.227671  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227679  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.227706  774167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 20:44:37.227725  774167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 20:44:37.227733  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227742  774167 command_runner.go:130] >       "size": "112198984",
	I0729 20:44:37.227750  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.227756  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.227761  774167 command_runner.go:130] >       },
	I0729 20:44:37.227766  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.227771  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.227776  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.227782  774167 command_runner.go:130] >     },
	I0729 20:44:37.227788  774167 command_runner.go:130] >     {
	I0729 20:44:37.227798  774167 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 20:44:37.227805  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.227815  774167 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 20:44:37.227821  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227828  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.227840  774167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 20:44:37.227852  774167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 20:44:37.227859  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227867  774167 command_runner.go:130] >       "size": "85953945",
	I0729 20:44:37.227874  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.227880  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.227888  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.227898  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.227904  774167 command_runner.go:130] >     },
	I0729 20:44:37.227913  774167 command_runner.go:130] >     {
	I0729 20:44:37.227925  774167 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 20:44:37.227933  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.227943  774167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 20:44:37.227951  774167 command_runner.go:130] >       ],
	I0729 20:44:37.227959  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.227974  774167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 20:44:37.227989  774167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 20:44:37.227998  774167 command_runner.go:130] >       ],
	I0729 20:44:37.228005  774167 command_runner.go:130] >       "size": "63051080",
	I0729 20:44:37.228014  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.228020  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.228027  774167 command_runner.go:130] >       },
	I0729 20:44:37.228049  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.228057  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.228067  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.228073  774167 command_runner.go:130] >     },
	I0729 20:44:37.228079  774167 command_runner.go:130] >     {
	I0729 20:44:37.228093  774167 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 20:44:37.228103  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.228113  774167 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 20:44:37.228121  774167 command_runner.go:130] >       ],
	I0729 20:44:37.228131  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.228144  774167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 20:44:37.228157  774167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 20:44:37.228167  774167 command_runner.go:130] >       ],
	I0729 20:44:37.228175  774167 command_runner.go:130] >       "size": "750414",
	I0729 20:44:37.228184  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.228192  774167 command_runner.go:130] >         "value": "65535"
	I0729 20:44:37.228198  774167 command_runner.go:130] >       },
	I0729 20:44:37.228217  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.228227  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.228236  774167 command_runner.go:130] >       "pinned": true
	I0729 20:44:37.228242  774167 command_runner.go:130] >     }
	I0729 20:44:37.228250  774167 command_runner.go:130] >   ]
	I0729 20:44:37.228254  774167 command_runner.go:130] > }
	I0729 20:44:37.228475  774167 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 20:44:37.228489  774167 crio.go:433] Images already preloaded, skipping extraction
	I0729 20:44:37.228548  774167 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:44:37.260712  774167 command_runner.go:130] > {
	I0729 20:44:37.260745  774167 command_runner.go:130] >   "images": [
	I0729 20:44:37.260751  774167 command_runner.go:130] >     {
	I0729 20:44:37.260764  774167 command_runner.go:130] >       "id": "5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f",
	I0729 20:44:37.260772  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.260779  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240715-585640e9"
	I0729 20:44:37.260782  774167 command_runner.go:130] >       ],
	I0729 20:44:37.260786  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.260796  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115",
	I0729 20:44:37.260803  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"
	I0729 20:44:37.260810  774167 command_runner.go:130] >       ],
	I0729 20:44:37.260814  774167 command_runner.go:130] >       "size": "87165492",
	I0729 20:44:37.260818  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.260822  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.260830  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.260835  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.260857  774167 command_runner.go:130] >     },
	I0729 20:44:37.260867  774167 command_runner.go:130] >     {
	I0729 20:44:37.260876  774167 command_runner.go:130] >       "id": "6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46",
	I0729 20:44:37.260881  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.260889  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240719-e7903573"
	I0729 20:44:37.260898  774167 command_runner.go:130] >       ],
	I0729 20:44:37.260905  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.260917  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9",
	I0729 20:44:37.260932  774167 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"
	I0729 20:44:37.260940  774167 command_runner.go:130] >       ],
	I0729 20:44:37.260945  774167 command_runner.go:130] >       "size": "87174707",
	I0729 20:44:37.260951  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.260957  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.260963  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.260967  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.260973  774167 command_runner.go:130] >     },
	I0729 20:44:37.260976  774167 command_runner.go:130] >     {
	I0729 20:44:37.260984  774167 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0729 20:44:37.260988  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.260997  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0729 20:44:37.261003  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261007  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261016  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0729 20:44:37.261025  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0729 20:44:37.261030  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261035  774167 command_runner.go:130] >       "size": "1363676",
	I0729 20:44:37.261040  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.261044  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261053  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261061  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261065  774167 command_runner.go:130] >     },
	I0729 20:44:37.261068  774167 command_runner.go:130] >     {
	I0729 20:44:37.261076  774167 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0729 20:44:37.261081  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261086  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0729 20:44:37.261092  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261099  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261108  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0729 20:44:37.261122  774167 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0729 20:44:37.261128  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261133  774167 command_runner.go:130] >       "size": "31470524",
	I0729 20:44:37.261139  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.261143  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261149  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261153  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261158  774167 command_runner.go:130] >     },
	I0729 20:44:37.261162  774167 command_runner.go:130] >     {
	I0729 20:44:37.261168  774167 command_runner.go:130] >       "id": "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4",
	I0729 20:44:37.261174  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261179  774167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.1"
	I0729 20:44:37.261185  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261189  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261198  774167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
	I0729 20:44:37.261207  774167 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"
	I0729 20:44:37.261216  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261222  774167 command_runner.go:130] >       "size": "61245718",
	I0729 20:44:37.261226  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.261231  774167 command_runner.go:130] >       "username": "nonroot",
	I0729 20:44:37.261235  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261241  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261245  774167 command_runner.go:130] >     },
	I0729 20:44:37.261250  774167 command_runner.go:130] >     {
	I0729 20:44:37.261255  774167 command_runner.go:130] >       "id": "3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899",
	I0729 20:44:37.261262  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261266  774167 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.12-0"
	I0729 20:44:37.261271  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261275  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261284  774167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62",
	I0729 20:44:37.261290  774167 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"
	I0729 20:44:37.261296  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261300  774167 command_runner.go:130] >       "size": "150779692",
	I0729 20:44:37.261306  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.261311  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.261319  774167 command_runner.go:130] >       },
	I0729 20:44:37.261323  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261329  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261333  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261338  774167 command_runner.go:130] >     },
	I0729 20:44:37.261342  774167 command_runner.go:130] >     {
	I0729 20:44:37.261350  774167 command_runner.go:130] >       "id": "1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d",
	I0729 20:44:37.261355  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261360  774167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.30.3"
	I0729 20:44:37.261365  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261369  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261379  774167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c",
	I0729 20:44:37.261388  774167 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"
	I0729 20:44:37.261393  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261397  774167 command_runner.go:130] >       "size": "117609954",
	I0729 20:44:37.261402  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.261406  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.261412  774167 command_runner.go:130] >       },
	I0729 20:44:37.261416  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261422  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261426  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261431  774167 command_runner.go:130] >     },
	I0729 20:44:37.261435  774167 command_runner.go:130] >     {
	I0729 20:44:37.261442  774167 command_runner.go:130] >       "id": "76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e",
	I0729 20:44:37.261449  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261454  774167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.30.3"
	I0729 20:44:37.261460  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261463  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261483  774167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7",
	I0729 20:44:37.261493  774167 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"
	I0729 20:44:37.261496  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261500  774167 command_runner.go:130] >       "size": "112198984",
	I0729 20:44:37.261504  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.261510  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.261514  774167 command_runner.go:130] >       },
	I0729 20:44:37.261524  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261531  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261536  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261541  774167 command_runner.go:130] >     },
	I0729 20:44:37.261545  774167 command_runner.go:130] >     {
	I0729 20:44:37.261550  774167 command_runner.go:130] >       "id": "55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1",
	I0729 20:44:37.261553  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261569  774167 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.30.3"
	I0729 20:44:37.261575  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261579  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261588  774167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80",
	I0729 20:44:37.261599  774167 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"
	I0729 20:44:37.261605  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261610  774167 command_runner.go:130] >       "size": "85953945",
	I0729 20:44:37.261616  774167 command_runner.go:130] >       "uid": null,
	I0729 20:44:37.261620  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261625  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261629  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261636  774167 command_runner.go:130] >     },
	I0729 20:44:37.261640  774167 command_runner.go:130] >     {
	I0729 20:44:37.261646  774167 command_runner.go:130] >       "id": "3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2",
	I0729 20:44:37.261652  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261656  774167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.30.3"
	I0729 20:44:37.261662  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261665  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261676  774167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266",
	I0729 20:44:37.261685  774167 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"
	I0729 20:44:37.261691  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261695  774167 command_runner.go:130] >       "size": "63051080",
	I0729 20:44:37.261701  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.261705  774167 command_runner.go:130] >         "value": "0"
	I0729 20:44:37.261710  774167 command_runner.go:130] >       },
	I0729 20:44:37.261714  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261720  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261724  774167 command_runner.go:130] >       "pinned": false
	I0729 20:44:37.261729  774167 command_runner.go:130] >     },
	I0729 20:44:37.261734  774167 command_runner.go:130] >     {
	I0729 20:44:37.261742  774167 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0729 20:44:37.261748  774167 command_runner.go:130] >       "repoTags": [
	I0729 20:44:37.261753  774167 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0729 20:44:37.261758  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261762  774167 command_runner.go:130] >       "repoDigests": [
	I0729 20:44:37.261770  774167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0729 20:44:37.261778  774167 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0729 20:44:37.261784  774167 command_runner.go:130] >       ],
	I0729 20:44:37.261788  774167 command_runner.go:130] >       "size": "750414",
	I0729 20:44:37.261794  774167 command_runner.go:130] >       "uid": {
	I0729 20:44:37.261798  774167 command_runner.go:130] >         "value": "65535"
	I0729 20:44:37.261803  774167 command_runner.go:130] >       },
	I0729 20:44:37.261807  774167 command_runner.go:130] >       "username": "",
	I0729 20:44:37.261811  774167 command_runner.go:130] >       "spec": null,
	I0729 20:44:37.261816  774167 command_runner.go:130] >       "pinned": true
	I0729 20:44:37.261820  774167 command_runner.go:130] >     }
	I0729 20:44:37.261823  774167 command_runner.go:130] >   ]
	I0729 20:44:37.261827  774167 command_runner.go:130] > }
	I0729 20:44:37.261960  774167 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 20:44:37.261975  774167 cache_images.go:84] Images are preloaded, skipping loading
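(Editor's note: the preload check above amounts to comparing the tags reported by `crictl images --output json` against the image list expected for the selected Kubernetes version. The Go sketch below illustrates that idea only; it is not minikube's actual crio.go/cache_images.go code, the struct fields mirror just the JSON fields visible in this log, and the "required" tags are a hypothetical subset taken from the output above.)

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields of `crictl images --output json` that appear in the log above.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Pinned   bool     `json:"pinned"`
}

type imageList struct {
	Images []image `json:"images"`
}

// preloaded reports whether every required tag is already present in the runtime's image store.
func preloaded(required []string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Hypothetical required list: a few of the tags reported in the log above.
	ok, err := preloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
	})
	fmt.Println(ok, err)
}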
	I0729 20:44:37.261983  774167 kubeadm.go:934] updating node { 192.168.39.229 8443 v1.30.3 crio true true} ...
	I0729 20:44:37.262100  774167 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-151054 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-151054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
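(Editor's note: the kubelet [Service] override logged at kubeadm.go:946 varies only in the Kubernetes version, node name and node IP. The sketch below shows one way such a drop-in could be rendered with text/template; the template text, type and field names are illustrative assumptions, not minikube's implementation, and the sample values are the ones visible in the log above.)

package main

import (
	"os"
	"text/template"
)

// kubeletOpts holds the per-node values that vary in the drop-in logged above (illustrative type).
type kubeletOpts struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

// dropIn is a simplified version of the systemd override shown in the log.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, kubeletOpts{
		KubernetesVersion: "v1.30.3",
		NodeName:          "multinode-151054",
		NodeIP:            "192.168.39.229",
	})
}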
	I0729 20:44:37.262172  774167 ssh_runner.go:195] Run: crio config
	I0729 20:44:37.302846  774167 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0729 20:44:37.302883  774167 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0729 20:44:37.302894  774167 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0729 20:44:37.302899  774167 command_runner.go:130] > #
	I0729 20:44:37.302909  774167 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0729 20:44:37.302919  774167 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0729 20:44:37.302928  774167 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0729 20:44:37.302953  774167 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0729 20:44:37.302965  774167 command_runner.go:130] > # reload'.
	I0729 20:44:37.302974  774167 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0729 20:44:37.302983  774167 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0729 20:44:37.302994  774167 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0729 20:44:37.303005  774167 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0729 20:44:37.303014  774167 command_runner.go:130] > [crio]
	I0729 20:44:37.303023  774167 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0729 20:44:37.303034  774167 command_runner.go:130] > # containers images, in this directory.
	I0729 20:44:37.303047  774167 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0729 20:44:37.303061  774167 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0729 20:44:37.303152  774167 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0729 20:44:37.303178  774167 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory, separately from Root.
	I0729 20:44:37.303408  774167 command_runner.go:130] > # imagestore = ""
	I0729 20:44:37.303432  774167 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0729 20:44:37.303441  774167 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0729 20:44:37.303513  774167 command_runner.go:130] > storage_driver = "overlay"
	I0729 20:44:37.303534  774167 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0729 20:44:37.303545  774167 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0729 20:44:37.303552  774167 command_runner.go:130] > storage_option = [
	I0729 20:44:37.303669  774167 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0729 20:44:37.303686  774167 command_runner.go:130] > ]
	I0729 20:44:37.303697  774167 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0729 20:44:37.303715  774167 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0729 20:44:37.303995  774167 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0729 20:44:37.304008  774167 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0729 20:44:37.304017  774167 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0729 20:44:37.304024  774167 command_runner.go:130] > # always happen on a node reboot
	I0729 20:44:37.304241  774167 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0729 20:44:37.304272  774167 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0729 20:44:37.304284  774167 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0729 20:44:37.304289  774167 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0729 20:44:37.304351  774167 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0729 20:44:37.304369  774167 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0729 20:44:37.304382  774167 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0729 20:44:37.304596  774167 command_runner.go:130] > # internal_wipe = true
	I0729 20:44:37.304620  774167 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0729 20:44:37.304631  774167 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0729 20:44:37.304834  774167 command_runner.go:130] > # internal_repair = false
	I0729 20:44:37.304856  774167 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0729 20:44:37.304865  774167 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0729 20:44:37.304873  774167 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0729 20:44:37.305067  774167 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0729 20:44:37.305079  774167 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0729 20:44:37.305085  774167 command_runner.go:130] > [crio.api]
	I0729 20:44:37.305093  774167 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0729 20:44:37.305310  774167 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0729 20:44:37.305323  774167 command_runner.go:130] > # IP address on which the stream server will listen.
	I0729 20:44:37.305527  774167 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0729 20:44:37.305543  774167 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0729 20:44:37.305556  774167 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0729 20:44:37.305742  774167 command_runner.go:130] > # stream_port = "0"
	I0729 20:44:37.305754  774167 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0729 20:44:37.306000  774167 command_runner.go:130] > # stream_enable_tls = false
	I0729 20:44:37.306014  774167 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0729 20:44:37.306161  774167 command_runner.go:130] > # stream_idle_timeout = ""
	I0729 20:44:37.306173  774167 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0729 20:44:37.306182  774167 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0729 20:44:37.306189  774167 command_runner.go:130] > # minutes.
	I0729 20:44:37.306371  774167 command_runner.go:130] > # stream_tls_cert = ""
	I0729 20:44:37.306402  774167 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0729 20:44:37.306416  774167 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0729 20:44:37.306516  774167 command_runner.go:130] > # stream_tls_key = ""
	I0729 20:44:37.306529  774167 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0729 20:44:37.306543  774167 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0729 20:44:37.306570  774167 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0729 20:44:37.306841  774167 command_runner.go:130] > # stream_tls_ca = ""
	I0729 20:44:37.306864  774167 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 20:44:37.306872  774167 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0729 20:44:37.306884  774167 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0729 20:44:37.306896  774167 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0729 20:44:37.306906  774167 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0729 20:44:37.306917  774167 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0729 20:44:37.306925  774167 command_runner.go:130] > [crio.runtime]
	I0729 20:44:37.306936  774167 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0729 20:44:37.306947  774167 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0729 20:44:37.306959  774167 command_runner.go:130] > # "nofile=1024:2048"
	I0729 20:44:37.306969  774167 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0729 20:44:37.307027  774167 command_runner.go:130] > # default_ulimits = [
	I0729 20:44:37.307134  774167 command_runner.go:130] > # ]
	I0729 20:44:37.307151  774167 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0729 20:44:37.307409  774167 command_runner.go:130] > # no_pivot = false
	I0729 20:44:37.307423  774167 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0729 20:44:37.307432  774167 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0729 20:44:37.307853  774167 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0729 20:44:37.307869  774167 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0729 20:44:37.307877  774167 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0729 20:44:37.307891  774167 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 20:44:37.308152  774167 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0729 20:44:37.308164  774167 command_runner.go:130] > # Cgroup setting for conmon
	I0729 20:44:37.308175  774167 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0729 20:44:37.308848  774167 command_runner.go:130] > conmon_cgroup = "pod"
	I0729 20:44:37.308866  774167 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0729 20:44:37.308874  774167 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0729 20:44:37.308884  774167 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0729 20:44:37.308894  774167 command_runner.go:130] > conmon_env = [
	I0729 20:44:37.308988  774167 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 20:44:37.309052  774167 command_runner.go:130] > ]
	I0729 20:44:37.309066  774167 command_runner.go:130] > # Additional environment variables to set for all the
	I0729 20:44:37.309074  774167 command_runner.go:130] > # containers. These are overridden if set in the
	I0729 20:44:37.309082  774167 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0729 20:44:37.309145  774167 command_runner.go:130] > # default_env = [
	I0729 20:44:37.309255  774167 command_runner.go:130] > # ]
	I0729 20:44:37.309268  774167 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0729 20:44:37.309280  774167 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0729 20:44:37.309497  774167 command_runner.go:130] > # selinux = false
	I0729 20:44:37.309511  774167 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0729 20:44:37.309520  774167 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0729 20:44:37.309529  774167 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0729 20:44:37.309673  774167 command_runner.go:130] > # seccomp_profile = ""
	I0729 20:44:37.309685  774167 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0729 20:44:37.309694  774167 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0729 20:44:37.309704  774167 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0729 20:44:37.309715  774167 command_runner.go:130] > # which might increase security.
	I0729 20:44:37.309724  774167 command_runner.go:130] > # This option is currently deprecated,
	I0729 20:44:37.309736  774167 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0729 20:44:37.309807  774167 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0729 20:44:37.309825  774167 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0729 20:44:37.309835  774167 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0729 20:44:37.309848  774167 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0729 20:44:37.309860  774167 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0729 20:44:37.309871  774167 command_runner.go:130] > # This option supports live configuration reload.
	I0729 20:44:37.310105  774167 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0729 20:44:37.310118  774167 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0729 20:44:37.310126  774167 command_runner.go:130] > # the cgroup blockio controller.
	I0729 20:44:37.310297  774167 command_runner.go:130] > # blockio_config_file = ""
	I0729 20:44:37.310311  774167 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0729 20:44:37.310317  774167 command_runner.go:130] > # blockio parameters.
	I0729 20:44:37.310528  774167 command_runner.go:130] > # blockio_reload = false
	I0729 20:44:37.310541  774167 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0729 20:44:37.310548  774167 command_runner.go:130] > # irqbalance daemon.
	I0729 20:44:37.310760  774167 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0729 20:44:37.310772  774167 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0729 20:44:37.310782  774167 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0729 20:44:37.310794  774167 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0729 20:44:37.311081  774167 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0729 20:44:37.311103  774167 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0729 20:44:37.311113  774167 command_runner.go:130] > # This option supports live configuration reload.
	I0729 20:44:37.311256  774167 command_runner.go:130] > # rdt_config_file = ""
	I0729 20:44:37.311272  774167 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0729 20:44:37.311356  774167 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0729 20:44:37.311380  774167 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0729 20:44:37.311514  774167 command_runner.go:130] > # separate_pull_cgroup = ""
	I0729 20:44:37.311529  774167 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0729 20:44:37.311542  774167 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0729 20:44:37.311551  774167 command_runner.go:130] > # will be added.
	I0729 20:44:37.311648  774167 command_runner.go:130] > # default_capabilities = [
	I0729 20:44:37.311801  774167 command_runner.go:130] > # 	"CHOWN",
	I0729 20:44:37.312075  774167 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0729 20:44:37.312088  774167 command_runner.go:130] > # 	"FSETID",
	I0729 20:44:37.312097  774167 command_runner.go:130] > # 	"FOWNER",
	I0729 20:44:37.312102  774167 command_runner.go:130] > # 	"SETGID",
	I0729 20:44:37.312109  774167 command_runner.go:130] > # 	"SETUID",
	I0729 20:44:37.312115  774167 command_runner.go:130] > # 	"SETPCAP",
	I0729 20:44:37.312125  774167 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0729 20:44:37.312132  774167 command_runner.go:130] > # 	"KILL",
	I0729 20:44:37.312151  774167 command_runner.go:130] > # ]
	I0729 20:44:37.312173  774167 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0729 20:44:37.312185  774167 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0729 20:44:37.312194  774167 command_runner.go:130] > # add_inheritable_capabilities = false
	I0729 20:44:37.312208  774167 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0729 20:44:37.312221  774167 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 20:44:37.312234  774167 command_runner.go:130] > default_sysctls = [
	I0729 20:44:37.312242  774167 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I0729 20:44:37.312250  774167 command_runner.go:130] > ]
	I0729 20:44:37.312258  774167 command_runner.go:130] > # List of devices on the host that a
	I0729 20:44:37.312283  774167 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0729 20:44:37.312294  774167 command_runner.go:130] > # allowed_devices = [
	I0729 20:44:37.312306  774167 command_runner.go:130] > # 	"/dev/fuse",
	I0729 20:44:37.312317  774167 command_runner.go:130] > # ]
	I0729 20:44:37.312325  774167 command_runner.go:130] > # List of additional devices, specified as
	I0729 20:44:37.312339  774167 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0729 20:44:37.312351  774167 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0729 20:44:37.312362  774167 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0729 20:44:37.312372  774167 command_runner.go:130] > # additional_devices = [
	I0729 20:44:37.312384  774167 command_runner.go:130] > # ]
	I0729 20:44:37.312393  774167 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0729 20:44:37.312414  774167 command_runner.go:130] > # cdi_spec_dirs = [
	I0729 20:44:37.312424  774167 command_runner.go:130] > # 	"/etc/cdi",
	I0729 20:44:37.312430  774167 command_runner.go:130] > # 	"/var/run/cdi",
	I0729 20:44:37.312436  774167 command_runner.go:130] > # ]
	I0729 20:44:37.312447  774167 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0729 20:44:37.312460  774167 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0729 20:44:37.312470  774167 command_runner.go:130] > # Defaults to false.
	I0729 20:44:37.312478  774167 command_runner.go:130] > # device_ownership_from_security_context = false
	I0729 20:44:37.312492  774167 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0729 20:44:37.312505  774167 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0729 20:44:37.312516  774167 command_runner.go:130] > # hooks_dir = [
	I0729 20:44:37.312524  774167 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0729 20:44:37.312538  774167 command_runner.go:130] > # ]
	I0729 20:44:37.312551  774167 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0729 20:44:37.312565  774167 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0729 20:44:37.312574  774167 command_runner.go:130] > # its default mounts from the following two files:
	I0729 20:44:37.312583  774167 command_runner.go:130] > #
	I0729 20:44:37.312592  774167 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0729 20:44:37.312606  774167 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0729 20:44:37.312617  774167 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0729 20:44:37.312624  774167 command_runner.go:130] > #
	I0729 20:44:37.312633  774167 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0729 20:44:37.312644  774167 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0729 20:44:37.312654  774167 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0729 20:44:37.312665  774167 command_runner.go:130] > #      only add mounts it finds in this file.
	I0729 20:44:37.312673  774167 command_runner.go:130] > #
	I0729 20:44:37.312681  774167 command_runner.go:130] > # default_mounts_file = ""
	I0729 20:44:37.312697  774167 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0729 20:44:37.312712  774167 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0729 20:44:37.312721  774167 command_runner.go:130] > pids_limit = 1024
	I0729 20:44:37.312730  774167 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0729 20:44:37.312744  774167 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0729 20:44:37.312755  774167 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0729 20:44:37.312772  774167 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0729 20:44:37.312781  774167 command_runner.go:130] > # log_size_max = -1
	I0729 20:44:37.312793  774167 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0729 20:44:37.312803  774167 command_runner.go:130] > # log_to_journald = false
	I0729 20:44:37.312814  774167 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0729 20:44:37.312825  774167 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0729 20:44:37.312844  774167 command_runner.go:130] > # Path to directory for container attach sockets.
	I0729 20:44:37.312855  774167 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0729 20:44:37.312867  774167 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0729 20:44:37.312873  774167 command_runner.go:130] > # bind_mount_prefix = ""
	I0729 20:44:37.312884  774167 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0729 20:44:37.312892  774167 command_runner.go:130] > # read_only = false
	I0729 20:44:37.312901  774167 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0729 20:44:37.312914  774167 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0729 20:44:37.312923  774167 command_runner.go:130] > # live configuration reload.
	I0729 20:44:37.312928  774167 command_runner.go:130] > # log_level = "info"
	I0729 20:44:37.312937  774167 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0729 20:44:37.312945  774167 command_runner.go:130] > # This option supports live configuration reload.
	I0729 20:44:37.312954  774167 command_runner.go:130] > # log_filter = ""
	I0729 20:44:37.312964  774167 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0729 20:44:37.312975  774167 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0729 20:44:37.312983  774167 command_runner.go:130] > # separated by comma.
	I0729 20:44:37.312994  774167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 20:44:37.313004  774167 command_runner.go:130] > # uid_mappings = ""
	I0729 20:44:37.313015  774167 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0729 20:44:37.313027  774167 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0729 20:44:37.313037  774167 command_runner.go:130] > # separated by comma.
	I0729 20:44:37.313050  774167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 20:44:37.313061  774167 command_runner.go:130] > # gid_mappings = ""
	I0729 20:44:37.313073  774167 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0729 20:44:37.313087  774167 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 20:44:37.313100  774167 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 20:44:37.313116  774167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 20:44:37.313125  774167 command_runner.go:130] > # minimum_mappable_uid = -1
	I0729 20:44:37.313136  774167 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0729 20:44:37.313149  774167 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0729 20:44:37.313162  774167 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0729 20:44:37.313177  774167 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0729 20:44:37.313186  774167 command_runner.go:130] > # minimum_mappable_gid = -1
	I0729 20:44:37.313197  774167 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0729 20:44:37.313210  774167 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0729 20:44:37.313223  774167 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0729 20:44:37.313240  774167 command_runner.go:130] > # ctr_stop_timeout = 30
	I0729 20:44:37.313251  774167 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0729 20:44:37.313266  774167 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0729 20:44:37.313278  774167 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0729 20:44:37.313285  774167 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0729 20:44:37.313295  774167 command_runner.go:130] > drop_infra_ctr = false
	I0729 20:44:37.313310  774167 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0729 20:44:37.313321  774167 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0729 20:44:37.313335  774167 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0729 20:44:37.313345  774167 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0729 20:44:37.313356  774167 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0729 20:44:37.313369  774167 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0729 20:44:37.313382  774167 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0729 20:44:37.313392  774167 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0729 20:44:37.313402  774167 command_runner.go:130] > # shared_cpuset = ""
	I0729 20:44:37.313413  774167 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0729 20:44:37.313425  774167 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0729 20:44:37.313435  774167 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0729 20:44:37.313446  774167 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0729 20:44:37.313456  774167 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0729 20:44:37.313465  774167 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0729 20:44:37.313479  774167 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0729 20:44:37.313486  774167 command_runner.go:130] > # enable_criu_support = false
	I0729 20:44:37.313498  774167 command_runner.go:130] > # Enable/disable the generation of the container,
	I0729 20:44:37.313512  774167 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0729 20:44:37.313522  774167 command_runner.go:130] > # enable_pod_events = false
	I0729 20:44:37.313539  774167 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 20:44:37.313551  774167 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0729 20:44:37.313563  774167 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0729 20:44:37.313571  774167 command_runner.go:130] > # default_runtime = "runc"
	I0729 20:44:37.313581  774167 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0729 20:44:37.313597  774167 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0729 20:44:37.313614  774167 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0729 20:44:37.313624  774167 command_runner.go:130] > # creation as a file is not desired either.
	I0729 20:44:37.313639  774167 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0729 20:44:37.313655  774167 command_runner.go:130] > # the hostname is being managed dynamically.
	I0729 20:44:37.313665  774167 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0729 20:44:37.313672  774167 command_runner.go:130] > # ]
	I0729 20:44:37.313680  774167 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0729 20:44:37.313692  774167 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0729 20:44:37.313704  774167 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0729 20:44:37.313714  774167 command_runner.go:130] > # Each entry in the table should follow the format:
	I0729 20:44:37.313718  774167 command_runner.go:130] > #
	I0729 20:44:37.313726  774167 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0729 20:44:37.313737  774167 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0729 20:44:37.313800  774167 command_runner.go:130] > # runtime_type = "oci"
	I0729 20:44:37.313811  774167 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0729 20:44:37.313823  774167 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0729 20:44:37.313830  774167 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0729 20:44:37.313841  774167 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0729 20:44:37.313850  774167 command_runner.go:130] > # monitor_env = []
	I0729 20:44:37.313860  774167 command_runner.go:130] > # privileged_without_host_devices = false
	I0729 20:44:37.313869  774167 command_runner.go:130] > # allowed_annotations = []
	I0729 20:44:37.313877  774167 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0729 20:44:37.313885  774167 command_runner.go:130] > # Where:
	I0729 20:44:37.313893  774167 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0729 20:44:37.313908  774167 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0729 20:44:37.313923  774167 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0729 20:44:37.313934  774167 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0729 20:44:37.313943  774167 command_runner.go:130] > #   in $PATH.
	I0729 20:44:37.313962  774167 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0729 20:44:37.313974  774167 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0729 20:44:37.313986  774167 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0729 20:44:37.313994  774167 command_runner.go:130] > #   state.
	I0729 20:44:37.314004  774167 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0729 20:44:37.314016  774167 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0729 20:44:37.314026  774167 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0729 20:44:37.314038  774167 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0729 20:44:37.314051  774167 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0729 20:44:37.314065  774167 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0729 20:44:37.314077  774167 command_runner.go:130] > #   The currently recognized values are:
	I0729 20:44:37.314090  774167 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0729 20:44:37.314104  774167 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0729 20:44:37.314119  774167 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0729 20:44:37.314132  774167 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0729 20:44:37.314146  774167 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0729 20:44:37.314159  774167 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0729 20:44:37.314172  774167 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0729 20:44:37.314186  774167 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0729 20:44:37.314198  774167 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0729 20:44:37.314210  774167 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0729 20:44:37.314221  774167 command_runner.go:130] > #   deprecated option "conmon".
	I0729 20:44:37.314234  774167 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0729 20:44:37.314245  774167 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0729 20:44:37.314258  774167 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0729 20:44:37.314270  774167 command_runner.go:130] > #   should be moved to the container's cgroup
	I0729 20:44:37.314282  774167 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0729 20:44:37.314293  774167 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0729 20:44:37.314306  774167 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0729 20:44:37.314317  774167 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0729 20:44:37.314324  774167 command_runner.go:130] > #
	I0729 20:44:37.314333  774167 command_runner.go:130] > # Using the seccomp notifier feature:
	I0729 20:44:37.314341  774167 command_runner.go:130] > #
	I0729 20:44:37.314352  774167 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0729 20:44:37.314364  774167 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0729 20:44:37.314370  774167 command_runner.go:130] > #
	I0729 20:44:37.314386  774167 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0729 20:44:37.314397  774167 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0729 20:44:37.314404  774167 command_runner.go:130] > #
	I0729 20:44:37.314414  774167 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0729 20:44:37.314423  774167 command_runner.go:130] > # feature.
	I0729 20:44:37.314430  774167 command_runner.go:130] > #
	I0729 20:44:37.314439  774167 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0729 20:44:37.314452  774167 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0729 20:44:37.314466  774167 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0729 20:44:37.314478  774167 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0729 20:44:37.314490  774167 command_runner.go:130] > # seconds if "io.kubernetes.cri-o.seccompNotifierAction" is set to "stop".
	I0729 20:44:37.314498  774167 command_runner.go:130] > #
	I0729 20:44:37.314511  774167 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0729 20:44:37.314527  774167 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0729 20:44:37.314539  774167 command_runner.go:130] > #
	I0729 20:44:37.314550  774167 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0729 20:44:37.314561  774167 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0729 20:44:37.314568  774167 command_runner.go:130] > #
	I0729 20:44:37.314579  774167 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0729 20:44:37.314590  774167 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0729 20:44:37.314598  774167 command_runner.go:130] > # limitation.
	I0729 20:44:37.314604  774167 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0729 20:44:37.314612  774167 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0729 20:44:37.314620  774167 command_runner.go:130] > runtime_type = "oci"
	I0729 20:44:37.314629  774167 command_runner.go:130] > runtime_root = "/run/runc"
	I0729 20:44:37.314636  774167 command_runner.go:130] > runtime_config_path = ""
	I0729 20:44:37.314646  774167 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0729 20:44:37.314654  774167 command_runner.go:130] > monitor_cgroup = "pod"
	I0729 20:44:37.314662  774167 command_runner.go:130] > monitor_exec_cgroup = ""
	I0729 20:44:37.314670  774167 command_runner.go:130] > monitor_env = [
	I0729 20:44:37.314678  774167 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0729 20:44:37.314685  774167 command_runner.go:130] > ]
	I0729 20:44:37.314691  774167 command_runner.go:130] > privileged_without_host_devices = false
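Following the table format documented above, an additional handler could be declared next to the runc entry; the handler name, binary path and root below are illustrative assumptions, not values observed in this run:

	[crio.runtime.runtimes.crun]
	runtime_path = "/usr/bin/crun"
	runtime_type = "oci"
	runtime_root = "/run/crun"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	# Permit the seccomp notifier annotation discussed above for this handler only.
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]

A pod would then select this handler through a Kubernetes RuntimeClass whose handler field matches the table name ("crun").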
	I0729 20:44:37.314703  774167 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0729 20:44:37.314713  774167 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0729 20:44:37.314725  774167 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0729 20:44:37.314746  774167 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0729 20:44:37.314759  774167 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0729 20:44:37.314770  774167 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0729 20:44:37.314785  774167 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0729 20:44:37.314798  774167 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0729 20:44:37.314809  774167 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0729 20:44:37.314821  774167 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0729 20:44:37.314826  774167 command_runner.go:130] > # Example:
	I0729 20:44:37.314833  774167 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0729 20:44:37.314841  774167 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0729 20:44:37.314848  774167 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0729 20:44:37.314855  774167 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0729 20:44:37.314860  774167 command_runner.go:130] > # cpuset = "0-1"
	I0729 20:44:37.314865  774167 command_runner.go:130] > # cpushares = 0
	I0729 20:44:37.314870  774167 command_runner.go:130] > # Where:
	I0729 20:44:37.314879  774167 command_runner.go:130] > # The workload name is workload-type.
	I0729 20:44:37.314888  774167 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0729 20:44:37.314896  774167 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0729 20:44:37.314905  774167 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0729 20:44:37.314916  774167 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0729 20:44:37.314925  774167 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
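Putting the pieces above together, a workload definition and its per-container override might look like this (the workload name, annotation values and CPU numbers are illustrative):

	[crio.runtime.workloads.high-perf]
	activation_annotation = "io.crio/high-perf"
	annotation_prefix = "io.crio.high-perf"
	[crio.runtime.workloads.high-perf.resources]
	# Defaults applied to every container in an opted-in pod.
	cpushares = 1024
	cpuset = "0-1"

A pod opts in by carrying the "io.crio/high-perf" annotation (any value), and, following the $annotation_prefix.$resource/$ctrName form described above, could override the cpuset for one container with an annotation such as io.crio.high-perf.cpuset/app = "2-3".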
	I0729 20:44:37.314932  774167 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0729 20:44:37.314942  774167 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0729 20:44:37.314949  774167 command_runner.go:130] > # Default value is set to true
	I0729 20:44:37.314956  774167 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0729 20:44:37.314963  774167 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0729 20:44:37.314969  774167 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0729 20:44:37.314975  774167 command_runner.go:130] > # Default value is set to 'false'
	I0729 20:44:37.314981  774167 command_runner.go:130] > # disable_hostport_mapping = false
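Both toggles live in the [crio.runtime] table; a sketch that simply restates the defaults described above would be:

	[crio.runtime]
	# Disable SELinux separation for host-network pods (default per the comment above).
	hostnetwork_disable_selinux = true
	# Keep CRI-O's hostport mapping enabled (default per the comment above).
	disable_hostport_mapping = false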
	I0729 20:44:37.314989  774167 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0729 20:44:37.314993  774167 command_runner.go:130] > #
	I0729 20:44:37.315002  774167 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0729 20:44:37.315011  774167 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0729 20:44:37.315019  774167 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0729 20:44:37.315028  774167 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0729 20:44:37.315040  774167 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0729 20:44:37.315061  774167 command_runner.go:130] > [crio.image]
	I0729 20:44:37.315072  774167 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0729 20:44:37.315080  774167 command_runner.go:130] > # default_transport = "docker://"
	I0729 20:44:37.315088  774167 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0729 20:44:37.315099  774167 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0729 20:44:37.315107  774167 command_runner.go:130] > # global_auth_file = ""
	I0729 20:44:37.315116  774167 command_runner.go:130] > # The image used to instantiate infra containers.
	I0729 20:44:37.315126  774167 command_runner.go:130] > # This option supports live configuration reload.
	I0729 20:44:37.315137  774167 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0729 20:44:37.315161  774167 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0729 20:44:37.315172  774167 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0729 20:44:37.315179  774167 command_runner.go:130] > # This option supports live configuration reload.
	I0729 20:44:37.315188  774167 command_runner.go:130] > # pause_image_auth_file = ""
	I0729 20:44:37.315196  774167 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0729 20:44:37.315204  774167 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0729 20:44:37.315217  774167 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0729 20:44:37.315228  774167 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0729 20:44:37.315238  774167 command_runner.go:130] > # pause_command = "/pause"
	I0729 20:44:37.315250  774167 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0729 20:44:37.315260  774167 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0729 20:44:37.315271  774167 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0729 20:44:37.315281  774167 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0729 20:44:37.315291  774167 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0729 20:44:37.315302  774167 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0729 20:44:37.315310  774167 command_runner.go:130] > # pinned_images = [
	I0729 20:44:37.315315  774167 command_runner.go:130] > # ]
	I0729 20:44:37.315324  774167 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0729 20:44:37.315336  774167 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0729 20:44:37.315347  774167 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0729 20:44:37.315358  774167 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0729 20:44:37.315371  774167 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0729 20:44:37.315379  774167 command_runner.go:130] > # signature_policy = ""
	I0729 20:44:37.315389  774167 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0729 20:44:37.315402  774167 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0729 20:44:37.315412  774167 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0729 20:44:37.315423  774167 command_runner.go:130] > # or the concatenated path is nonexistent, then the signature_policy or system
	I0729 20:44:37.315441  774167 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0729 20:44:37.315451  774167 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0729 20:44:37.315461  774167 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0729 20:44:37.315473  774167 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0729 20:44:37.315482  774167 command_runner.go:130] > # changing them here.
	I0729 20:44:37.315488  774167 command_runner.go:130] > # insecure_registries = [
	I0729 20:44:37.315496  774167 command_runner.go:130] > # ]
	I0729 20:44:37.315507  774167 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0729 20:44:37.315517  774167 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0729 20:44:37.315526  774167 command_runner.go:130] > # image_volumes = "mkdir"
	I0729 20:44:37.315539  774167 command_runner.go:130] > # Temporary directory to use for storing big files
	I0729 20:44:37.315549  774167 command_runner.go:130] > # big_files_temporary_dir = ""
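A hedged example of overriding a few of the image settings above in one place (the pause image tag matches the commented default; the insecure registry is purely illustrative):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	# Keep the pause image out of kubelet garbage collection, as described above.
	pinned_images = [
		"registry.k8s.io/pause:3.9",
	]
	# Prefer /etc/containers/registries.conf for registry settings; shown here only for completeness.
	insecure_registries = [
		"registry.local:5000",
	]
	image_volumes = "mkdir"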
	I0729 20:44:37.315559  774167 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0729 20:44:37.315567  774167 command_runner.go:130] > # CNI plugins.
	I0729 20:44:37.315573  774167 command_runner.go:130] > [crio.network]
	I0729 20:44:37.315585  774167 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0729 20:44:37.315601  774167 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0729 20:44:37.315610  774167 command_runner.go:130] > # cni_default_network = ""
	I0729 20:44:37.315621  774167 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0729 20:44:37.315633  774167 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0729 20:44:37.315643  774167 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0729 20:44:37.315652  774167 command_runner.go:130] > # plugin_dirs = [
	I0729 20:44:37.315658  774167 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0729 20:44:37.315666  774167 command_runner.go:130] > # ]
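A minimal [crio.network] override might look as follows; the default network name is an assumption, while the directories match the commented defaults above:

	[crio.network]
	cni_default_network = "bridge"
	network_dir = "/etc/cni/net.d/"
	plugin_dirs = [
		"/opt/cni/bin/",
	]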
	I0729 20:44:37.315674  774167 command_runner.go:130] > # A necessary configuration for Prometheus-based metrics retrieval
	I0729 20:44:37.315682  774167 command_runner.go:130] > [crio.metrics]
	I0729 20:44:37.315690  774167 command_runner.go:130] > # Globally enable or disable metrics support.
	I0729 20:44:37.315699  774167 command_runner.go:130] > enable_metrics = true
	I0729 20:44:37.315706  774167 command_runner.go:130] > # Specify enabled metrics collectors.
	I0729 20:44:37.315719  774167 command_runner.go:130] > # By default, all metrics are enabled.
	I0729 20:44:37.315732  774167 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0729 20:44:37.315745  774167 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0729 20:44:37.315756  774167 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0729 20:44:37.315765  774167 command_runner.go:130] > # metrics_collectors = [
	I0729 20:44:37.315770  774167 command_runner.go:130] > # 	"operations",
	I0729 20:44:37.315777  774167 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0729 20:44:37.315782  774167 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0729 20:44:37.315788  774167 command_runner.go:130] > # 	"operations_errors",
	I0729 20:44:37.315792  774167 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0729 20:44:37.315796  774167 command_runner.go:130] > # 	"image_pulls_by_name",
	I0729 20:44:37.315801  774167 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0729 20:44:37.315807  774167 command_runner.go:130] > # 	"image_pulls_failures",
	I0729 20:44:37.315811  774167 command_runner.go:130] > # 	"image_pulls_successes",
	I0729 20:44:37.315818  774167 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0729 20:44:37.315822  774167 command_runner.go:130] > # 	"image_layer_reuse",
	I0729 20:44:37.315828  774167 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0729 20:44:37.315832  774167 command_runner.go:130] > # 	"containers_oom_total",
	I0729 20:44:37.315838  774167 command_runner.go:130] > # 	"containers_oom",
	I0729 20:44:37.315843  774167 command_runner.go:130] > # 	"processes_defunct",
	I0729 20:44:37.315848  774167 command_runner.go:130] > # 	"operations_total",
	I0729 20:44:37.315852  774167 command_runner.go:130] > # 	"operations_latency_seconds",
	I0729 20:44:37.315864  774167 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0729 20:44:37.315873  774167 command_runner.go:130] > # 	"operations_errors_total",
	I0729 20:44:37.315880  774167 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0729 20:44:37.315890  774167 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0729 20:44:37.315896  774167 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0729 20:44:37.315905  774167 command_runner.go:130] > # 	"image_pulls_success_total",
	I0729 20:44:37.315912  774167 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0729 20:44:37.315922  774167 command_runner.go:130] > # 	"containers_oom_count_total",
	I0729 20:44:37.315932  774167 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0729 20:44:37.315943  774167 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0729 20:44:37.315954  774167 command_runner.go:130] > # ]
	I0729 20:44:37.315965  774167 command_runner.go:130] > # The port on which the metrics server will listen.
	I0729 20:44:37.315974  774167 command_runner.go:130] > # metrics_port = 9090
	I0729 20:44:37.315982  774167 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0729 20:44:37.315990  774167 command_runner.go:130] > # metrics_socket = ""
	I0729 20:44:37.315995  774167 command_runner.go:130] > # The certificate for the secure metrics server.
	I0729 20:44:37.316001  774167 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0729 20:44:37.316009  774167 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0729 20:44:37.316014  774167 command_runner.go:130] > # certificate on any modification event.
	I0729 20:44:37.316020  774167 command_runner.go:130] > # metrics_cert = ""
	I0729 20:44:37.316025  774167 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0729 20:44:37.316049  774167 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0729 20:44:37.316059  774167 command_runner.go:130] > # metrics_key = ""
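For example, metrics could be limited to a handful of collectors from the list above and served on the default port (the selection is illustrative):

	[crio.metrics]
	enable_metrics = true
	metrics_port = 9090
	metrics_collectors = [
		"operations",
		"image_pulls_failure_total",
		"containers_oom_count_total",
	]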
	I0729 20:44:37.316068  774167 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0729 20:44:37.316076  774167 command_runner.go:130] > [crio.tracing]
	I0729 20:44:37.316086  774167 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0729 20:44:37.316095  774167 command_runner.go:130] > # enable_tracing = false
	I0729 20:44:37.316103  774167 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0729 20:44:37.316111  774167 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0729 20:44:37.316117  774167 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0729 20:44:37.316124  774167 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
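A sketch of turning tracing on against a collector at the default endpoint shown above, sampling every span as the comment suggests:

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "0.0.0.0:4317"
	# 1000000 samples per million spans = always sample.
	tracing_sampling_rate_per_million = 1000000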
	I0729 20:44:37.316128  774167 command_runner.go:130] > # CRI-O NRI configuration.
	I0729 20:44:37.316132  774167 command_runner.go:130] > [crio.nri]
	I0729 20:44:37.316135  774167 command_runner.go:130] > # Globally enable or disable NRI.
	I0729 20:44:37.316139  774167 command_runner.go:130] > # enable_nri = false
	I0729 20:44:37.316144  774167 command_runner.go:130] > # NRI socket to listen on.
	I0729 20:44:37.316148  774167 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0729 20:44:37.316152  774167 command_runner.go:130] > # NRI plugin directory to use.
	I0729 20:44:37.316156  774167 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0729 20:44:37.316161  774167 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0729 20:44:37.316168  774167 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0729 20:44:37.316174  774167 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0729 20:44:37.316180  774167 command_runner.go:130] > # nri_disable_connections = false
	I0729 20:44:37.316186  774167 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0729 20:44:37.316193  774167 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0729 20:44:37.316198  774167 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0729 20:44:37.316204  774167 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
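Enabling NRI while keeping the defaults listed above would amount to (the values simply restate the commented defaults):

	[crio.nri]
	enable_nri = true
	nri_listen = "/var/run/nri/nri.sock"
	nri_plugin_dir = "/opt/nri/plugins"
	nri_plugin_config_dir = "/etc/nri/conf.d"
	nri_plugin_registration_timeout = "5s"
	nri_plugin_request_timeout = "2s"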
	I0729 20:44:37.316210  774167 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0729 20:44:37.316214  774167 command_runner.go:130] > [crio.stats]
	I0729 20:44:37.316220  774167 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0729 20:44:37.316227  774167 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0729 20:44:37.316231  774167 command_runner.go:130] > # stats_collection_period = 0
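A periodic collection interval, instead of the on-demand default, would be configured as follows (the 10-second value is an illustrative assumption):

	[crio.stats]
	stats_collection_period = 10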
	I0729 20:44:37.316256  774167 command_runner.go:130] ! time="2024-07-29 20:44:37.267158551Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0729 20:44:37.316271  774167 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0729 20:44:37.316389  774167 cni.go:84] Creating CNI manager for ""
	I0729 20:44:37.316399  774167 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0729 20:44:37.316409  774167 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 20:44:37.316431  774167 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.229 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-151054 NodeName:multinode-151054 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 20:44:37.316576  774167 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-151054"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.229
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.229"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 20:44:37.316640  774167 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 20:44:37.326094  774167 command_runner.go:130] > kubeadm
	I0729 20:44:37.326115  774167 command_runner.go:130] > kubectl
	I0729 20:44:37.326120  774167 command_runner.go:130] > kubelet
	I0729 20:44:37.326140  774167 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 20:44:37.326201  774167 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 20:44:37.334826  774167 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0729 20:44:37.350903  774167 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 20:44:37.366907  774167 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0729 20:44:37.381984  774167 ssh_runner.go:195] Run: grep 192.168.39.229	control-plane.minikube.internal$ /etc/hosts
	I0729 20:44:37.385561  774167 command_runner.go:130] > 192.168.39.229	control-plane.minikube.internal
	I0729 20:44:37.385643  774167 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:44:37.523722  774167 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:44:37.538224  774167 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054 for IP: 192.168.39.229
	I0729 20:44:37.538247  774167 certs.go:194] generating shared ca certs ...
	I0729 20:44:37.538270  774167 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:44:37.538466  774167 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 20:44:37.538506  774167 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 20:44:37.538515  774167 certs.go:256] generating profile certs ...
	I0729 20:44:37.538601  774167 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/client.key
	I0729 20:44:37.538657  774167 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/apiserver.key.d3ff0f9a
	I0729 20:44:37.538694  774167 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/proxy-client.key
	I0729 20:44:37.538705  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0729 20:44:37.538717  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0729 20:44:37.538727  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0729 20:44:37.538737  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0729 20:44:37.538746  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0729 20:44:37.538781  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0729 20:44:37.538795  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0729 20:44:37.538804  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0729 20:44:37.538863  774167 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem (1338 bytes)
	W0729 20:44:37.538892  774167 certs.go:480] ignoring /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962_empty.pem, impossibly tiny 0 bytes
	I0729 20:44:37.538902  774167 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 20:44:37.538924  774167 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 20:44:37.538951  774167 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 20:44:37.538972  774167 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 20:44:37.539008  774167 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:44:37.539034  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem -> /usr/share/ca-certificates/740962.pem
	I0729 20:44:37.539048  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> /usr/share/ca-certificates/7409622.pem
	I0729 20:44:37.539064  774167 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:44:37.539687  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 20:44:37.563311  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 20:44:37.586579  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 20:44:37.608648  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 20:44:37.641739  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0729 20:44:37.692533  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 20:44:37.723150  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 20:44:37.753983  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/multinode-151054/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 20:44:37.776664  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem --> /usr/share/ca-certificates/740962.pem (1338 bytes)
	I0729 20:44:37.797720  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /usr/share/ca-certificates/7409622.pem (1708 bytes)
	I0729 20:44:37.826560  774167 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 20:44:37.852340  774167 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 20:44:37.872885  774167 ssh_runner.go:195] Run: openssl version
	I0729 20:44:37.879821  774167 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0729 20:44:37.880177  774167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/740962.pem && ln -fs /usr/share/ca-certificates/740962.pem /etc/ssl/certs/740962.pem"
	I0729 20:44:37.899542  774167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/740962.pem
	I0729 20:44:37.904816  774167 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 20:44:37.906162  774167 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 20:44:37.906228  774167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/740962.pem
	I0729 20:44:37.917373  774167 command_runner.go:130] > 51391683
	I0729 20:44:37.917630  774167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/740962.pem /etc/ssl/certs/51391683.0"
	I0729 20:44:37.932755  774167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7409622.pem && ln -fs /usr/share/ca-certificates/7409622.pem /etc/ssl/certs/7409622.pem"
	I0729 20:44:37.944419  774167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7409622.pem
	I0729 20:44:37.948514  774167 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 20:44:37.948646  774167 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 20:44:37.948694  774167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7409622.pem
	I0729 20:44:37.953766  774167 command_runner.go:130] > 3ec20f2e
	I0729 20:44:37.953980  774167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7409622.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 20:44:37.964434  774167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 20:44:37.978359  774167 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:44:37.982532  774167 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:44:37.982717  774167 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:44:37.982774  774167 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:44:37.987801  774167 command_runner.go:130] > b5213941
	I0729 20:44:37.987969  774167 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 20:44:37.997078  774167 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:44:38.001192  774167 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:44:38.001219  774167 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0729 20:44:38.001229  774167 command_runner.go:130] > Device: 253,1	Inode: 4197931     Links: 1
	I0729 20:44:38.001239  774167 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0729 20:44:38.001250  774167 command_runner.go:130] > Access: 2024-07-29 20:37:48.112504643 +0000
	I0729 20:44:38.001262  774167 command_runner.go:130] > Modify: 2024-07-29 20:37:48.112504643 +0000
	I0729 20:44:38.001270  774167 command_runner.go:130] > Change: 2024-07-29 20:37:48.112504643 +0000
	I0729 20:44:38.001278  774167 command_runner.go:130] >  Birth: 2024-07-29 20:37:48.112504643 +0000
	I0729 20:44:38.001330  774167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 20:44:38.006805  774167 command_runner.go:130] > Certificate will not expire
	I0729 20:44:38.006883  774167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 20:44:38.012200  774167 command_runner.go:130] > Certificate will not expire
	I0729 20:44:38.012284  774167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 20:44:38.017465  774167 command_runner.go:130] > Certificate will not expire
	I0729 20:44:38.017631  774167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 20:44:38.022748  774167 command_runner.go:130] > Certificate will not expire
	I0729 20:44:38.022808  774167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 20:44:38.027750  774167 command_runner.go:130] > Certificate will not expire
	I0729 20:44:38.027939  774167 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0729 20:44:38.032992  774167 command_runner.go:130] > Certificate will not expire
	I0729 20:44:38.033065  774167 kubeadm.go:392] StartCluster: {Name:multinode-151054 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:multinode-151054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.229 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.98 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.252 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:44:38.033229  774167 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 20:44:38.033295  774167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 20:44:38.066384  774167 command_runner.go:130] > c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a
	I0729 20:44:38.066412  774167 command_runner.go:130] > b2898ece6d62716cb34a0d2298ea9287f4e8128003a938b04d05749163588a62
	I0729 20:44:38.066421  774167 command_runner.go:130] > 13a24620fc650642e35895ef8075b03ee6f69e5936d47695a76046bb755765ca
	I0729 20:44:38.066432  774167 command_runner.go:130] > ff4b9a92f1149a48f7b46e24b20bbcf29fa26de244fb0e40227cb81df381afb0
	I0729 20:44:38.066442  774167 command_runner.go:130] > 8cc1098813fc68af4447889f8e1a0ab2502ac50e35de420a3715744f54a9a2d0
	I0729 20:44:38.066449  774167 command_runner.go:130] > bb8e0a4b6f646dee0d17ae90ebaae5980a9278c344a3cbf71ef029fbab9a09e8
	I0729 20:44:38.066455  774167 command_runner.go:130] > 888230c2bc7db0c13a92b39530084837b21ca72108fe4af8328b412e66c2b104
	I0729 20:44:38.066462  774167 command_runner.go:130] > 4f6aa9c58ffc6a2c3ab0e140d828a572520317bde487b2bd3786507089e9c7a1
	I0729 20:44:38.066467  774167 command_runner.go:130] > 1e7183d60699a2647987166cb5cd762b512c2aa6a62ef32a3fb000f0df9b9a77
	I0729 20:44:38.067875  774167 cri.go:89] found id: "c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a"
	I0729 20:44:38.067900  774167 cri.go:89] found id: "b2898ece6d62716cb34a0d2298ea9287f4e8128003a938b04d05749163588a62"
	I0729 20:44:38.067910  774167 cri.go:89] found id: "13a24620fc650642e35895ef8075b03ee6f69e5936d47695a76046bb755765ca"
	I0729 20:44:38.067915  774167 cri.go:89] found id: "ff4b9a92f1149a48f7b46e24b20bbcf29fa26de244fb0e40227cb81df381afb0"
	I0729 20:44:38.067920  774167 cri.go:89] found id: "8cc1098813fc68af4447889f8e1a0ab2502ac50e35de420a3715744f54a9a2d0"
	I0729 20:44:38.067926  774167 cri.go:89] found id: "bb8e0a4b6f646dee0d17ae90ebaae5980a9278c344a3cbf71ef029fbab9a09e8"
	I0729 20:44:38.067931  774167 cri.go:89] found id: "888230c2bc7db0c13a92b39530084837b21ca72108fe4af8328b412e66c2b104"
	I0729 20:44:38.067936  774167 cri.go:89] found id: "4f6aa9c58ffc6a2c3ab0e140d828a572520317bde487b2bd3786507089e9c7a1"
	I0729 20:44:38.067941  774167 cri.go:89] found id: "1e7183d60699a2647987166cb5cd762b512c2aa6a62ef32a3fb000f0df9b9a77"
	I0729 20:44:38.067950  774167 cri.go:89] found id: ""
	I0729 20:44:38.068012  774167 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.102565307Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722286130102533159,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc7d5df0-5768-41e3-8bad-4a3468404330 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.104396343Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2ff2e60-e4dc-47ed-850d-6b7ccfe23301 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.104464524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2ff2e60-e4dc-47ed-850d-6b7ccfe23301 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.104829044Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51a8061550f8bccf115e8b220ba0e7236932887392456a5639c2547979a336b4,PodSandboxId:7416f5fa879889c86cfe91ccd00f0e3b341d8571aa2169f2fcb0c31ce778e64b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722285917710455181,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8c5533285ef716732054213308e08584216b6bae4256a2378f6be6f8d9f087,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722285890617429299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb574400709c948e74f17adcb1fb26ad6eaadcf146cdcd77d923bb6222369b9,PodSandboxId:ed02bcecb90b1d19fbbe78a1c1861b5c8c41fb5cfe2709b7a242cfb85a3c2397,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722285884345341404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b52b4d
d-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54494c8905f16425e071d55bd738b79d3173f93e384b59b7e84fede2096c255,PodSandboxId:5dfb8e27131a328d456f9bc91b0cac9d98da6c9bc985387e59ef31e889cf4477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722285884272984672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},A
nnotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e87aba726b426aa6ce249d71d77623d5a91dd30ca5292310cb0e5220f80c5b,PodSandboxId:f2bd75299a3f6d81050d7afdb9924f01d1834a2521607173f8f9bb0eef272cab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722285884205114838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856ef3eb93f24fd07576ab7206d2085805662bcebb64556667b2b01e500ddb72,PodSandboxId:26c5ca4c1bc2980973ab62350998e38dbc4a9be2d341ea8373bedf42a5e1ac84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722285884174240259,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 71466a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff53b546c4c3702886138d0b4711c165dbba5bdd1118c3abb7e2603b8ddac15,PodSandboxId:e2b29044c7d930c4c36484999e91e7aca4a656ab43aa6be775e6986814b762d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722285884108160102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be41b62accf2841e0bf2a352b8c68f862e471bcd07645f4572d107d85ea1b983,PodSandboxId:154dc60e024b5855877d0390afbbf16840e353b9335c918d4c3d7cc68bc96298,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722285884088632816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c4392de50bc07017a521e440ba7d279a532b6fd1f4cc13180077b462921dff,PodSandboxId:20b521e6cfba55320d2b030d0264c2ba62bc617f3ccc94bb0e57ceb532fd2b03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722285884007005249,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722285877820254996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a276adeed80c028bb35eb09b2cb443209b068a299ac5694c5d2167332c145bb,PodSandboxId:fc8b196b6ecd4e0283ae3ae01ce19e90baf801b1e8034dc914a6cc1dc4984ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722285561911648549,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a24620fc650642e35895ef8075b03ee6f69e5936d47695a76046bb755765ca,PodSandboxId:ba883697d286373559b0b0bf93d6c059c27ef3586757046381f298aa3a05fe77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722285506317278666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0b52b4dd-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4b9a92f1149a48f7b46e24b20bbcf29fa26de244fb0e40227cb81df381afb0,PodSandboxId:05b772f39774e53f2d3ffded31ab8bf030242810585805892b5f95248b889ccb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722285494358871452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.container.hash: 71466a39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc1098813fc68af4447889f8e1a0ab2502ac50e35de420a3715744f54a9a2d0,PodSandboxId:27e1b6698aa587fb3a445623a23f43432086c464d51f9909caeb400338b21951,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722285492070788837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8e0a4b6f646dee0d17ae90ebaae5980a9278c344a3cbf71ef029fbab9a09e8,PodSandboxId:27bd748552179126164ade67e5386543dcdc732b1b3ea11cfe1f7e5544345696,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722285471191217385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:888230c2bc7db0c13a92b39530084837b21ca72108fe4af8328b412e66c2b104,PodSandboxId:fa2f571bbc1b9ead40892d5e97bdd9171d30b62484371ddd16bbd53fe198f5ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722285471184707486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6aa9c58ffc6a2c3ab0e140d828a572520317bde487b2bd3786507089e9c7a1,PodSandboxId:a7afdd5c40aa8979f340c2615c1d68b292205d3e8db3e4088b2fe903d43194c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722285471129572125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c
44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7183d60699a2647987166cb5cd762b512c2aa6a62ef32a3fb000f0df9b9a77,PodSandboxId:c4121b9f76afd9d331a8a948540b49b135bbaf0b3b17f542a072778fc54257ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722285471099350018,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a2ff2e60-e4dc-47ed-850d-6b7ccfe23301 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.142114994Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b0ac6ccf-380f-4fb0-9207-2e92429622c2 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.142280410Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b0ac6ccf-380f-4fb0-9207-2e92429622c2 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.143620198Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9c1f9d45-d4ab-47f7-a8f2-9a5ad8fe3550 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.144010126Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722286130143989029,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9c1f9d45-d4ab-47f7-a8f2-9a5ad8fe3550 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.144513395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d9089d2-0146-494b-9617-4f438a72c161 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.144584549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d9089d2-0146-494b-9617-4f438a72c161 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.144912865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51a8061550f8bccf115e8b220ba0e7236932887392456a5639c2547979a336b4,PodSandboxId:7416f5fa879889c86cfe91ccd00f0e3b341d8571aa2169f2fcb0c31ce778e64b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722285917710455181,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8c5533285ef716732054213308e08584216b6bae4256a2378f6be6f8d9f087,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722285890617429299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb574400709c948e74f17adcb1fb26ad6eaadcf146cdcd77d923bb6222369b9,PodSandboxId:ed02bcecb90b1d19fbbe78a1c1861b5c8c41fb5cfe2709b7a242cfb85a3c2397,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722285884345341404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b52b4d
d-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54494c8905f16425e071d55bd738b79d3173f93e384b59b7e84fede2096c255,PodSandboxId:5dfb8e27131a328d456f9bc91b0cac9d98da6c9bc985387e59ef31e889cf4477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722285884272984672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},A
nnotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e87aba726b426aa6ce249d71d77623d5a91dd30ca5292310cb0e5220f80c5b,PodSandboxId:f2bd75299a3f6d81050d7afdb9924f01d1834a2521607173f8f9bb0eef272cab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722285884205114838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856ef3eb93f24fd07576ab7206d2085805662bcebb64556667b2b01e500ddb72,PodSandboxId:26c5ca4c1bc2980973ab62350998e38dbc4a9be2d341ea8373bedf42a5e1ac84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722285884174240259,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 71466a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff53b546c4c3702886138d0b4711c165dbba5bdd1118c3abb7e2603b8ddac15,PodSandboxId:e2b29044c7d930c4c36484999e91e7aca4a656ab43aa6be775e6986814b762d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722285884108160102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be41b62accf2841e0bf2a352b8c68f862e471bcd07645f4572d107d85ea1b983,PodSandboxId:154dc60e024b5855877d0390afbbf16840e353b9335c918d4c3d7cc68bc96298,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722285884088632816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c4392de50bc07017a521e440ba7d279a532b6fd1f4cc13180077b462921dff,PodSandboxId:20b521e6cfba55320d2b030d0264c2ba62bc617f3ccc94bb0e57ceb532fd2b03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722285884007005249,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722285877820254996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a276adeed80c028bb35eb09b2cb443209b068a299ac5694c5d2167332c145bb,PodSandboxId:fc8b196b6ecd4e0283ae3ae01ce19e90baf801b1e8034dc914a6cc1dc4984ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722285561911648549,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a24620fc650642e35895ef8075b03ee6f69e5936d47695a76046bb755765ca,PodSandboxId:ba883697d286373559b0b0bf93d6c059c27ef3586757046381f298aa3a05fe77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722285506317278666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0b52b4dd-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4b9a92f1149a48f7b46e24b20bbcf29fa26de244fb0e40227cb81df381afb0,PodSandboxId:05b772f39774e53f2d3ffded31ab8bf030242810585805892b5f95248b889ccb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722285494358871452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.container.hash: 71466a39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc1098813fc68af4447889f8e1a0ab2502ac50e35de420a3715744f54a9a2d0,PodSandboxId:27e1b6698aa587fb3a445623a23f43432086c464d51f9909caeb400338b21951,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722285492070788837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8e0a4b6f646dee0d17ae90ebaae5980a9278c344a3cbf71ef029fbab9a09e8,PodSandboxId:27bd748552179126164ade67e5386543dcdc732b1b3ea11cfe1f7e5544345696,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722285471191217385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:888230c2bc7db0c13a92b39530084837b21ca72108fe4af8328b412e66c2b104,PodSandboxId:fa2f571bbc1b9ead40892d5e97bdd9171d30b62484371ddd16bbd53fe198f5ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722285471184707486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6aa9c58ffc6a2c3ab0e140d828a572520317bde487b2bd3786507089e9c7a1,PodSandboxId:a7afdd5c40aa8979f340c2615c1d68b292205d3e8db3e4088b2fe903d43194c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722285471129572125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c
44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7183d60699a2647987166cb5cd762b512c2aa6a62ef32a3fb000f0df9b9a77,PodSandboxId:c4121b9f76afd9d331a8a948540b49b135bbaf0b3b17f542a072778fc54257ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722285471099350018,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d9089d2-0146-494b-9617-4f438a72c161 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.187745856Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=613f800c-9ef1-4e06-8244-e0e127ba95e4 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.187815986Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=613f800c-9ef1-4e06-8244-e0e127ba95e4 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.188925088Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=04a916d9-5d98-46c8-87e4-e6629cf772c6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.189543075Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722286130189350998,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=04a916d9-5d98-46c8-87e4-e6629cf772c6 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.190095948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e1126f4-e46a-4bb6-90b3-925028863f66 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.190154678Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e1126f4-e46a-4bb6-90b3-925028863f66 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.190537006Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51a8061550f8bccf115e8b220ba0e7236932887392456a5639c2547979a336b4,PodSandboxId:7416f5fa879889c86cfe91ccd00f0e3b341d8571aa2169f2fcb0c31ce778e64b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722285917710455181,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8c5533285ef716732054213308e08584216b6bae4256a2378f6be6f8d9f087,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722285890617429299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb574400709c948e74f17adcb1fb26ad6eaadcf146cdcd77d923bb6222369b9,PodSandboxId:ed02bcecb90b1d19fbbe78a1c1861b5c8c41fb5cfe2709b7a242cfb85a3c2397,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722285884345341404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b52b4d
d-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54494c8905f16425e071d55bd738b79d3173f93e384b59b7e84fede2096c255,PodSandboxId:5dfb8e27131a328d456f9bc91b0cac9d98da6c9bc985387e59ef31e889cf4477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722285884272984672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},A
nnotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e87aba726b426aa6ce249d71d77623d5a91dd30ca5292310cb0e5220f80c5b,PodSandboxId:f2bd75299a3f6d81050d7afdb9924f01d1834a2521607173f8f9bb0eef272cab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722285884205114838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856ef3eb93f24fd07576ab7206d2085805662bcebb64556667b2b01e500ddb72,PodSandboxId:26c5ca4c1bc2980973ab62350998e38dbc4a9be2d341ea8373bedf42a5e1ac84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722285884174240259,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 71466a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff53b546c4c3702886138d0b4711c165dbba5bdd1118c3abb7e2603b8ddac15,PodSandboxId:e2b29044c7d930c4c36484999e91e7aca4a656ab43aa6be775e6986814b762d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722285884108160102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be41b62accf2841e0bf2a352b8c68f862e471bcd07645f4572d107d85ea1b983,PodSandboxId:154dc60e024b5855877d0390afbbf16840e353b9335c918d4c3d7cc68bc96298,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722285884088632816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c4392de50bc07017a521e440ba7d279a532b6fd1f4cc13180077b462921dff,PodSandboxId:20b521e6cfba55320d2b030d0264c2ba62bc617f3ccc94bb0e57ceb532fd2b03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722285884007005249,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722285877820254996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a276adeed80c028bb35eb09b2cb443209b068a299ac5694c5d2167332c145bb,PodSandboxId:fc8b196b6ecd4e0283ae3ae01ce19e90baf801b1e8034dc914a6cc1dc4984ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722285561911648549,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a24620fc650642e35895ef8075b03ee6f69e5936d47695a76046bb755765ca,PodSandboxId:ba883697d286373559b0b0bf93d6c059c27ef3586757046381f298aa3a05fe77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722285506317278666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0b52b4dd-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4b9a92f1149a48f7b46e24b20bbcf29fa26de244fb0e40227cb81df381afb0,PodSandboxId:05b772f39774e53f2d3ffded31ab8bf030242810585805892b5f95248b889ccb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722285494358871452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.container.hash: 71466a39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc1098813fc68af4447889f8e1a0ab2502ac50e35de420a3715744f54a9a2d0,PodSandboxId:27e1b6698aa587fb3a445623a23f43432086c464d51f9909caeb400338b21951,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722285492070788837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8e0a4b6f646dee0d17ae90ebaae5980a9278c344a3cbf71ef029fbab9a09e8,PodSandboxId:27bd748552179126164ade67e5386543dcdc732b1b3ea11cfe1f7e5544345696,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722285471191217385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:888230c2bc7db0c13a92b39530084837b21ca72108fe4af8328b412e66c2b104,PodSandboxId:fa2f571bbc1b9ead40892d5e97bdd9171d30b62484371ddd16bbd53fe198f5ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722285471184707486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6aa9c58ffc6a2c3ab0e140d828a572520317bde487b2bd3786507089e9c7a1,PodSandboxId:a7afdd5c40aa8979f340c2615c1d68b292205d3e8db3e4088b2fe903d43194c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722285471129572125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c
44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7183d60699a2647987166cb5cd762b512c2aa6a62ef32a3fb000f0df9b9a77,PodSandboxId:c4121b9f76afd9d331a8a948540b49b135bbaf0b3b17f542a072778fc54257ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722285471099350018,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e1126f4-e46a-4bb6-90b3-925028863f66 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.227642541Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=814faf86-4a03-42ad-9451-4d3a41910657 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.227712826Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=814faf86-4a03-42ad-9451-4d3a41910657 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.228752604Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91bcd1f4-5c07-4026-b98d-9b67f59c64e5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.229313517Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722286130229280696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:143052,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91bcd1f4-5c07-4026-b98d-9b67f59c64e5 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.229789249Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d244ae2f-81ac-4427-adab-6abfa17fd1f6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.229841237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d244ae2f-81ac-4427-adab-6abfa17fd1f6 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:48:50 multinode-151054 crio[2857]: time="2024-07-29 20:48:50.230298519Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:51a8061550f8bccf115e8b220ba0e7236932887392456a5639c2547979a336b4,PodSandboxId:7416f5fa879889c86cfe91ccd00f0e3b341d8571aa2169f2fcb0c31ce778e64b,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1722285917710455181,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8c5533285ef716732054213308e08584216b6bae4256a2378f6be6f8d9f087,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722285890617429299,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb574400709c948e74f17adcb1fb26ad6eaadcf146cdcd77d923bb6222369b9,PodSandboxId:ed02bcecb90b1d19fbbe78a1c1861b5c8c41fb5cfe2709b7a242cfb85a3c2397,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722285884345341404,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0b52b4d
d-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c54494c8905f16425e071d55bd738b79d3173f93e384b59b7e84fede2096c255,PodSandboxId:5dfb8e27131a328d456f9bc91b0cac9d98da6c9bc985387e59ef31e889cf4477,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722285884272984672,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},A
nnotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e87aba726b426aa6ce249d71d77623d5a91dd30ca5292310cb0e5220f80c5b,PodSandboxId:f2bd75299a3f6d81050d7afdb9924f01d1834a2521607173f8f9bb0eef272cab,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722285884205114838,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:856ef3eb93f24fd07576ab7206d2085805662bcebb64556667b2b01e500ddb72,PodSandboxId:26c5ca4c1bc2980973ab62350998e38dbc4a9be2d341ea8373bedf42a5e1ac84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_RUNNING,CreatedAt:1722285884174240259,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 71466a39,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dff53b546c4c3702886138d0b4711c165dbba5bdd1118c3abb7e2603b8ddac15,PodSandboxId:e2b29044c7d930c4c36484999e91e7aca4a656ab43aa6be775e6986814b762d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722285884108160102,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kub
ernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be41b62accf2841e0bf2a352b8c68f862e471bcd07645f4572d107d85ea1b983,PodSandboxId:154dc60e024b5855877d0390afbbf16840e353b9335c918d4c3d7cc68bc96298,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722285884088632816,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:map[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernet
es.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77c4392de50bc07017a521e440ba7d279a532b6fd1f4cc13180077b462921dff,PodSandboxId:20b521e6cfba55320d2b030d0264c2ba62bc617f3ccc94bb0e57ceb532fd2b03,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722285884007005249,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a,PodSandboxId:63cc7e479365faad674ae2dc61e1dcf37ecf5cf035f41ee790b3c6d6cf270eea,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722285877820254996,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-b5wh5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b703a9ed-bb2b-4659-a7b3-90b0a410816c,},Annotations:map[string]string{io.kubernetes.container.hash: 58ac801f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a276adeed80c028bb35eb09b2cb443209b068a299ac5694c5d2167332c145bb,PodSandboxId:fc8b196b6ecd4e0283ae3ae01ce19e90baf801b1e8034dc914a6cc1dc4984ebc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1722285561911648549,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-fc5497c4f-xzlcl,i
o.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a183ecda-22ea-4803-8cf4-44a508504fcd,},Annotations:map[string]string{io.kubernetes.container.hash: eaeee35a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13a24620fc650642e35895ef8075b03ee6f69e5936d47695a76046bb755765ca,PodSandboxId:ba883697d286373559b0b0bf93d6c059c27ef3586757046381f298aa3a05fe77,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722285506317278666,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: 0b52b4dd-9625-4ec7-8baf-c41eb5e7c601,},Annotations:map[string]string{io.kubernetes.container.hash: 5b10dfd8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff4b9a92f1149a48f7b46e24b20bbcf29fa26de244fb0e40227cb81df381afb0,PodSandboxId:05b772f39774e53f2d3ffded31ab8bf030242810585805892b5f95248b889ccb,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46,State:CONTAINER_EXITED,CreatedAt:1722285494358871452,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-w47zp,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 3dd562cc-9b76-4ddc-ae67-f2054dd4e8c8,},Annotations:map[string]string{io.kubernetes.container.hash: 71466a39,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cc1098813fc68af4447889f8e1a0ab2502ac50e35de420a3715744f54a9a2d0,PodSandboxId:27e1b6698aa587fb3a445623a23f43432086c464d51f9909caeb400338b21951,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722285492070788837,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-r4c4j,io.kubernetes.pod.namespace: kube-system,io.kubernete
s.pod.uid: 96100a20-c36f-43ca-bfd9-973f4081239d,},Annotations:map[string]string{io.kubernetes.container.hash: 4f9367ce,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb8e0a4b6f646dee0d17ae90ebaae5980a9278c344a3cbf71ef029fbab9a09e8,PodSandboxId:27bd748552179126164ade67e5386543dcdc732b1b3ea11cfe1f7e5544345696,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722285471191217385,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-151054,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: fa9ada34162e7d8ab0371909d6b8ded7,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:888230c2bc7db0c13a92b39530084837b21ca72108fe4af8328b412e66c2b104,PodSandboxId:fa2f571bbc1b9ead40892d5e97bdd9171d30b62484371ddd16bbd53fe198f5ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722285471184707486,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 129b17735802af04f7113930ce58ab7a,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f6aa9c58ffc6a2c3ab0e140d828a572520317bde487b2bd3786507089e9c7a1,PodSandboxId:a7afdd5c40aa8979f340c2615c1d68b292205d3e8db3e4088b2fe903d43194c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722285471129572125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9c
44e1c785e611896129b21f48c919d,},Annotations:map[string]string{io.kubernetes.container.hash: 4c51cca3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e7183d60699a2647987166cb5cd762b512c2aa6a62ef32a3fb000f0df9b9a77,PodSandboxId:c4121b9f76afd9d331a8a948540b49b135bbaf0b3b17f542a072778fc54257ac,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722285471099350018,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-151054,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf1f886c6c0cc20c949ca6b7a872bc47,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 1ecf60d9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d244ae2f-81ac-4427-adab-6abfa17fd1f6 name=/runtime.v1.RuntimeService/ListContainers
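Note: the ListContainersResponse above is one CRI-O debug line that the log collector wraps across many physical lines; the container inventory it carries is summarized in the "container status" table below. As a rough sketch, and assuming the multinode-151054 VM is still reachable, the same CRI queries seen here (Version, ImageFsInfo) can be replayed directly against the crio socket with crictl:

	# run inside the VM; the endpoint matches the cri-socket annotation reported below
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo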
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	51a8061550f8b       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   7416f5fa87988       busybox-fc5497c4f-xzlcl
	ac8c5533285ef       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      3 minutes ago       Running             coredns                   2                   63cc7e479365f       coredns-7db6d8ff4d-b5wh5
	edb574400709c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       1                   ed02bcecb90b1       storage-provisioner
	c54494c8905f1       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      4 minutes ago       Running             kube-proxy                1                   5dfb8e27131a3       kube-proxy-r4c4j
	68e87aba726b4       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      4 minutes ago       Running             kube-scheduler            1                   f2bd75299a3f6       kube-scheduler-multinode-151054
	856ef3eb93f24       6f1d07c71fa0f426df75b802ec3f53ac5c1ae4110f67ab6d9a760083cc1d0f46                                      4 minutes ago       Running             kindnet-cni               1                   26c5ca4c1bc29       kindnet-w47zp
	dff53b546c4c3       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      4 minutes ago       Running             kube-controller-manager   1                   e2b29044c7d93       kube-controller-manager-multinode-151054
	be41b62accf28       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      4 minutes ago       Running             etcd                      1                   154dc60e024b5       etcd-multinode-151054
	77c4392de50bc       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      4 minutes ago       Running             kube-apiserver            1                   20b521e6cfba5       kube-apiserver-multinode-151054
	c8c79ce8f8c6f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      4 minutes ago       Exited              coredns                   1                   63cc7e479365f       coredns-7db6d8ff4d-b5wh5
	3a276adeed80c       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   fc8b196b6ecd4       busybox-fc5497c4f-xzlcl
	13a24620fc650       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 minutes ago      Exited              storage-provisioner       0                   ba883697d2863       storage-provisioner
	ff4b9a92f1149       docker.io/kindest/kindnetd@sha256:6d6071dbf83147a09972dc93b5ff84ff0103fa2231936557113757665f6195b9    10 minutes ago      Exited              kindnet-cni               0                   05b772f39774e       kindnet-w47zp
	8cc1098813fc6       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1                                      10 minutes ago      Exited              kube-proxy                0                   27e1b6698aa58       kube-proxy-r4c4j
	bb8e0a4b6f646       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e                                      10 minutes ago      Exited              kube-controller-manager   0                   27bd748552179       kube-controller-manager-multinode-151054
	888230c2bc7db       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2                                      10 minutes ago      Exited              kube-scheduler            0                   fa2f571bbc1b9       kube-scheduler-multinode-151054
	4f6aa9c58ffc6       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d                                      10 minutes ago      Exited              kube-apiserver            0                   a7afdd5c40aa8       kube-apiserver-multinode-151054
	1e7183d60699a       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899                                      10 minutes ago      Exited              etcd                      0                   c4121b9f76afd       etcd-multinode-151054
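This table is the crictl view of the same containers: each workload has a Running post-restart attempt next to its Exited pre-restart attempt (coredns needed a second restart), which is what a successful in-place restart of the control-plane node should look like. A minimal way to regenerate it from the host, assuming the profile still exists, is:

	# list all containers, including exited ones, via the profile's VM
	minikube ssh -p multinode-151054 -- sudo crictl ps -a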
	
	
	==> coredns [ac8c5533285ef716732054213308e08584216b6bae4256a2378f6be6f8d9f087] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:44919 - 19462 "HINFO IN 8958517495534879098.2881146296358567092. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010488425s
	
	
	==> coredns [c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51882 - 16352 "HINFO IN 7252056825391412349.2958399299106700554. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015489499s
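The second coredns log belongs to the exited attempt-1 container: its repeated "connection refused" errors against 10.96.0.1:443 appear to come from the window in which the kube-apiserver was still restarting, after which it was terminated and the attempt-2 container above started cleanly. Assuming the cluster from this run is still up, the older container's output can be retrieved with kubectl's --previous flag, for example:

	# logs of the previously terminated coredns container in this pod
	kubectl -n kube-system logs coredns-7db6d8ff4d-b5wh5 --previous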
	
	
	==> describe nodes <==
	Name:               multinode-151054
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-151054
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=multinode-151054
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T20_37_57_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:37:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-151054
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:48:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:44:49 +0000   Mon, 29 Jul 2024 20:37:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:44:49 +0000   Mon, 29 Jul 2024 20:37:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:44:49 +0000   Mon, 29 Jul 2024 20:37:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:44:49 +0000   Mon, 29 Jul 2024 20:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.229
	  Hostname:    multinode-151054
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 fb998e3d9ebe4fefad43875bf7e965fa
	  System UUID:                fb998e3d-9ebe-4fef-ad43-875bf7e965fa
	  Boot ID:                    cb3d3153-48cd-4261-844f-da4501702e2e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-xzlcl                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m31s
	  kube-system                 coredns-7db6d8ff4d-b5wh5                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-151054                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-w47zp                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-151054             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-151054    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-r4c4j                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-151054             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 10m    kube-proxy       
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeAllocatableEnforced  10m    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m    kubelet          Node multinode-151054 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m    kubelet          Node multinode-151054 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m    kubelet          Node multinode-151054 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m    node-controller  Node multinode-151054 event: Registered Node multinode-151054 in Controller
	  Normal  NodeReady                10m    kubelet          Node multinode-151054 status is now: NodeReady
	  Normal  Starting                 4m1s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m1s   kubelet          Node multinode-151054 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m1s   kubelet          Node multinode-151054 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m1s   kubelet          Node multinode-151054 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m1s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m50s  node-controller  Node multinode-151054 event: Registered Node multinode-151054 in Controller
	
	
	Name:               multinode-151054-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-151054-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=multinode-151054
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_29T20_45_28_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:45:27 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-151054-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:46:28 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 29 Jul 2024 20:45:58 +0000   Mon, 29 Jul 2024 20:47:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 29 Jul 2024 20:45:58 +0000   Mon, 29 Jul 2024 20:47:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 29 Jul 2024 20:45:58 +0000   Mon, 29 Jul 2024 20:47:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 29 Jul 2024 20:45:58 +0000   Mon, 29 Jul 2024 20:47:10 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.98
	  Hostname:    multinode-151054-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 820ce78469ec4c72ae42f934153557b2
	  System UUID:                820ce784-69ec-4c72-ae42-f934153557b2
	  Boot ID:                    24a4d6f5-0294-4736-81c0-86585300cbca
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-4hd28    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 kindnet-n8znv              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m54s
	  kube-system                 kube-proxy-k7bnr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m19s                  kube-proxy       
	  Normal  Starting                 9m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m54s (x2 over 9m54s)  kubelet          Node multinode-151054-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m54s (x2 over 9m54s)  kubelet          Node multinode-151054-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m54s (x2 over 9m54s)  kubelet          Node multinode-151054-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m33s                  kubelet          Node multinode-151054-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m23s (x2 over 3m23s)  kubelet          Node multinode-151054-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m23s (x2 over 3m23s)  kubelet          Node multinode-151054-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m23s (x2 over 3m23s)  kubelet          Node multinode-151054-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m5s                   kubelet          Node multinode-151054-m02 status is now: NodeReady
	  Normal  NodeNotReady             100s                   node-controller  Node multinode-151054-m02 status is now: NodeNotReady
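Both node descriptions agree with the container and event history above: the control-plane node is Ready again after its restart, while multinode-151054-m02 carries node.kubernetes.io/unreachable taints and NodeStatusUnknown conditions because its kubelet stopped posting status at 20:47:10 (the NodeNotReady event 100s before this snapshot). Assuming the kubeconfig from this run is still available, the same view can be reproduced with:

	# describe both nodes of the profile in one call
	kubectl describe node multinode-151054 multinode-151054-m02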
	
	
	==> dmesg <==
	[  +0.045090] systemd-fstab-generator[611]: Ignoring "noauto" option for root device
	[  +0.157263] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.133278] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.263049] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +3.970738] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +3.557363] systemd-fstab-generator[940]: Ignoring "noauto" option for root device
	[  +0.068952] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.999813] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +0.084354] kauditd_printk_skb: 69 callbacks suppressed
	[Jul29 20:38] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.096431] systemd-fstab-generator[1455]: Ignoring "noauto" option for root device
	[  +5.136064] kauditd_printk_skb: 51 callbacks suppressed
	[Jul29 20:39] kauditd_printk_skb: 14 callbacks suppressed
	[Jul29 20:44] systemd-fstab-generator[2775]: Ignoring "noauto" option for root device
	[  +0.136240] systemd-fstab-generator[2787]: Ignoring "noauto" option for root device
	[  +0.162770] systemd-fstab-generator[2801]: Ignoring "noauto" option for root device
	[  +0.138356] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +0.257825] systemd-fstab-generator[2841]: Ignoring "noauto" option for root device
	[  +1.893675] systemd-fstab-generator[2940]: Ignoring "noauto" option for root device
	[  +6.493850] kauditd_printk_skb: 132 callbacks suppressed
	[  +5.139916] systemd-fstab-generator[3791]: Ignoring "noauto" option for root device
	[  +0.089232] kauditd_printk_skb: 62 callbacks suppressed
	[ +11.512925] kauditd_printk_skb: 19 callbacks suppressed
	[Jul29 20:45] systemd-fstab-generator[3965]: Ignoring "noauto" option for root device
	[ +14.638094] kauditd_printk_skb: 14 callbacks suppressed
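The dmesg excerpt is mostly systemd-fstab-generator and kauditd rate-limit noise from the two boot/restart phases (20:37-20:39 and 20:44-20:45); nothing in the excerpt points at OOM kills or disk errors. If the VM is still running, the full kernel ring buffer is available via:

	minikube ssh -p multinode-151054 -- sudo dmesg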
	
	
	==> etcd [1e7183d60699a2647987166cb5cd762b512c2aa6a62ef32a3fb000f0df9b9a77] <==
	{"level":"info","ts":"2024-07-29T20:37:51.841475Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T20:37:51.84341Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T20:37:51.844056Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.229:2379"}
	{"level":"warn","ts":"2024-07-29T20:38:56.574502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.709089ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7886243852606418830 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-151054-m02.17e6c991faa1ff30\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-151054-m02.17e6c991faa1ff30\" value_size:640 lease:7886243852606418230 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T20:38:56.574673Z","caller":"traceutil/trace.go:171","msg":"trace[1728078085] linearizableReadLoop","detail":"{readStateIndex:470; appliedIndex:468; }","duration":"127.091709ms","start":"2024-07-29T20:38:56.447558Z","end":"2024-07-29T20:38:56.57465Z","steps":["trace[1728078085] 'read index received'  (duration: 125.382304ms)","trace[1728078085] 'applied index is now lower than readState.Index'  (duration: 1.70866ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T20:38:56.574729Z","caller":"traceutil/trace.go:171","msg":"trace[1297579120] transaction","detail":"{read_only:false; response_revision:449; number_of_response:1; }","duration":"157.708956ms","start":"2024-07-29T20:38:56.417015Z","end":"2024-07-29T20:38:56.574724Z","steps":["trace[1297579120] 'process raft request'  (duration: 157.593007ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T20:38:56.574777Z","caller":"traceutil/trace.go:171","msg":"trace[1842555160] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"224.621858ms","start":"2024-07-29T20:38:56.350141Z","end":"2024-07-29T20:38:56.574763Z","steps":["trace[1842555160] 'process raft request'  (duration: 24.220271ms)","trace[1842555160] 'compare'  (duration: 199.628265ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T20:38:56.574894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.338735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-151054-m02\" ","response":"range_response_count:1 size:1925"}
	{"level":"info","ts":"2024-07-29T20:38:56.574917Z","caller":"traceutil/trace.go:171","msg":"trace[1583237489] range","detail":"{range_begin:/registry/minions/multinode-151054-m02; range_end:; response_count:1; response_revision:449; }","duration":"127.389323ms","start":"2024-07-29T20:38:56.447518Z","end":"2024-07-29T20:38:56.574907Z","steps":["trace[1583237489] 'agreement among raft nodes before linearized reading'  (duration: 127.325583ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T20:39:50.300985Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.229421ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7886243852606419280 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/multinode-151054-m03.17e6c99e7be7f2e1\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-151054-m03.17e6c99e7be7f2e1\" value_size:642 lease:7886243852606418839 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-07-29T20:39:50.301288Z","caller":"traceutil/trace.go:171","msg":"trace[1155449345] linearizableReadLoop","detail":"{readStateIndex:622; appliedIndex:620; }","duration":"173.334101ms","start":"2024-07-29T20:39:50.12793Z","end":"2024-07-29T20:39:50.301264Z","steps":["trace[1155449345] 'read index received'  (duration: 21.729406ms)","trace[1155449345] 'applied index is now lower than readState.Index'  (duration: 151.60393ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T20:39:50.301447Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.507247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-151054-m03\" ","response":"range_response_count:1 size:1926"}
	{"level":"info","ts":"2024-07-29T20:39:50.301494Z","caller":"traceutil/trace.go:171","msg":"trace[213535604] range","detail":"{range_begin:/registry/minions/multinode-151054-m03; range_end:; response_count:1; response_revision:585; }","duration":"173.587629ms","start":"2024-07-29T20:39:50.1279Z","end":"2024-07-29T20:39:50.301488Z","steps":["trace[213535604] 'agreement among raft nodes before linearized reading'  (duration: 173.43879ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T20:39:50.301532Z","caller":"traceutil/trace.go:171","msg":"trace[176953486] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"180.716704ms","start":"2024-07-29T20:39:50.120809Z","end":"2024-07-29T20:39:50.301525Z","steps":["trace[176953486] 'process raft request'  (duration: 180.407782ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T20:39:50.301483Z","caller":"traceutil/trace.go:171","msg":"trace[636608988] transaction","detail":"{read_only:false; response_revision:584; number_of_response:1; }","duration":"239.990741ms","start":"2024-07-29T20:39:50.061473Z","end":"2024-07-29T20:39:50.301464Z","steps":["trace[636608988] 'process raft request'  (duration: 88.155777ms)","trace[636608988] 'compare'  (duration: 151.115126ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-29T20:43:03.674961Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T20:43:03.675069Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-151054","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.229:2380"],"advertise-client-urls":["https://192.168.39.229:2379"]}
	{"level":"warn","ts":"2024-07-29T20:43:03.675182Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T20:43:03.675276Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T20:43:03.709753Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.229:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T20:43:03.709845Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.229:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T20:43:03.709938Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b8647f2870156d71","current-leader-member-id":"b8647f2870156d71"}
	{"level":"info","ts":"2024-07-29T20:43:03.713281Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2024-07-29T20:43:03.71348Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2024-07-29T20:43:03.713526Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-151054","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.229:2380"],"advertise-client-urls":["https://192.168.39.229:2379"]}
	
	
	==> etcd [be41b62accf2841e0bf2a352b8c68f862e471bcd07645f4572d107d85ea1b983] <==
	{"level":"info","ts":"2024-07-29T20:44:44.878291Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T20:44:44.8783Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T20:44:44.878808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 switched to configuration voters=(13286884612305677681)"}
	{"level":"info","ts":"2024-07-29T20:44:44.878915Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2bfbf13ce68722b","local-member-id":"b8647f2870156d71","added-peer-id":"b8647f2870156d71","added-peer-peer-urls":["https://192.168.39.229:2380"]}
	{"level":"info","ts":"2024-07-29T20:44:44.890141Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2bfbf13ce68722b","local-member-id":"b8647f2870156d71","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T20:44:44.89341Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T20:44:44.924571Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T20:44:44.931645Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b8647f2870156d71","initial-advertise-peer-urls":["https://192.168.39.229:2380"],"listen-peer-urls":["https://192.168.39.229:2380"],"advertise-client-urls":["https://192.168.39.229:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.229:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T20:44:44.934461Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T20:44:44.928439Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2024-07-29T20:44:44.938465Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.229:2380"}
	{"level":"info","ts":"2024-07-29T20:44:45.976456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T20:44:45.976575Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T20:44:45.976631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 received MsgPreVoteResp from b8647f2870156d71 at term 2"}
	{"level":"info","ts":"2024-07-29T20:44:45.976662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T20:44:45.976686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 received MsgVoteResp from b8647f2870156d71 at term 3"}
	{"level":"info","ts":"2024-07-29T20:44:45.976712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8647f2870156d71 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T20:44:45.976741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8647f2870156d71 elected leader b8647f2870156d71 at term 3"}
	{"level":"info","ts":"2024-07-29T20:44:45.979452Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b8647f2870156d71","local-member-attributes":"{Name:multinode-151054 ClientURLs:[https://192.168.39.229:2379]}","request-path":"/0/members/b8647f2870156d71/attributes","cluster-id":"2bfbf13ce68722b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T20:44:45.979535Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T20:44:45.979878Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T20:44:45.979915Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T20:44:45.980079Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T20:44:45.982246Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.229:2379"}
	{"level":"info","ts":"2024-07-29T20:44:45.9832Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:48:50 up 11 min,  0 users,  load average: 0.81, 0.44, 0.22
	Linux multinode-151054 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [856ef3eb93f24fd07576ab7206d2085805662bcebb64556667b2b01e500ddb72] <==
	I0729 20:47:45.108744       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:47:55.111145       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:47:55.111195       1 main.go:299] handling current node
	I0729 20:47:55.111220       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:47:55.111229       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:48:05.117472       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:48:05.117585       1 main.go:299] handling current node
	I0729 20:48:05.117616       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:48:05.117635       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:48:15.116108       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:48:15.116164       1 main.go:299] handling current node
	I0729 20:48:15.116182       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:48:15.116188       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:48:25.114192       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:48:25.114277       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:48:25.114530       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:48:25.114555       1 main.go:299] handling current node
	I0729 20:48:35.115536       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:48:35.115678       1 main.go:299] handling current node
	I0729 20:48:35.115713       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:48:35.115733       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:48:45.109031       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:48:45.109079       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:48:45.109272       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:48:45.109290       1 main.go:299] handling current node
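The running kindnet container only knows about two nodes at this point (192.168.39.229 and 192.168.39.98 with 10.244.1.0/24), whereas the exited attempt-0 log below still lists a third node, multinode-151054-m03 on 10.244.3.0/24, from before the restart. A quick way to confirm which pod CIDRs the CNI actually programmed, assuming the control-plane VM is reachable, is to inspect its routing table:

	minikube ssh -p multinode-151054 -- ip route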
	
	
	==> kindnet [ff4b9a92f1149a48f7b46e24b20bbcf29fa26de244fb0e40227cb81df381afb0] <==
	I0729 20:42:15.367541       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.3.0/24] 
	I0729 20:42:25.367125       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:42:25.367267       1 main.go:299] handling current node
	I0729 20:42:25.367403       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:42:25.367442       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:42:25.367595       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0729 20:42:25.367621       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.3.0/24] 
	I0729 20:42:35.375290       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:42:35.375344       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:42:35.375607       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0729 20:42:35.375654       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.3.0/24] 
	I0729 20:42:35.375758       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:42:35.375785       1 main.go:299] handling current node
	I0729 20:42:45.367449       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:42:45.367489       1 main.go:299] handling current node
	I0729 20:42:45.367508       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:42:45.367514       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:42:45.367664       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0729 20:42:45.367682       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.3.0/24] 
	I0729 20:42:55.375096       1 main.go:295] Handling node with IPs: map[192.168.39.229:{}]
	I0729 20:42:55.375301       1 main.go:299] handling current node
	I0729 20:42:55.375355       1 main.go:295] Handling node with IPs: map[192.168.39.98:{}]
	I0729 20:42:55.375468       1 main.go:322] Node multinode-151054-m02 has CIDR [10.244.1.0/24] 
	I0729 20:42:55.375670       1 main.go:295] Handling node with IPs: map[192.168.39.252:{}]
	I0729 20:42:55.375707       1 main.go:322] Node multinode-151054-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4f6aa9c58ffc6a2c3ab0e140d828a572520317bde487b2bd3786507089e9c7a1] <==
	W0729 20:43:03.705041       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705088       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705136       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705190       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705242       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705290       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705338       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705477       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705560       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705592       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.705625       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706289       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706355       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706484       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706541       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706584       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706635       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706683       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706731       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706780       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706871       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.706950       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.707093       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.707143       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0729 20:43:03.707181       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [77c4392de50bc07017a521e440ba7d279a532b6fd1f4cc13180077b462921dff] <==
	I0729 20:44:47.239724       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 20:44:47.239763       1 policy_source.go:224] refreshing policies
	I0729 20:44:47.257518       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 20:44:47.257630       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 20:44:47.258701       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 20:44:47.260027       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0729 20:44:47.260081       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0729 20:44:47.260546       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 20:44:47.258706       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 20:44:47.266945       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0729 20:44:47.268250       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0729 20:44:47.268440       1 aggregator.go:165] initial CRD sync complete...
	I0729 20:44:47.268549       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 20:44:47.268640       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 20:44:47.268665       1 cache.go:39] Caches are synced for autoregister controller
	E0729 20:44:47.277912       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0729 20:44:47.323017       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 20:44:48.161137       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 20:44:49.851235       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 20:44:49.967118       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 20:44:49.979045       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 20:44:50.045469       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 20:44:50.052989       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 20:45:00.537769       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 20:45:00.637072       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bb8e0a4b6f646dee0d17ae90ebaae5980a9278c344a3cbf71ef029fbab9a09e8] <==
	I0729 20:38:56.614343       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151054-m02" podCIDRs=["10.244.1.0/24"]
	I0729 20:38:59.125669       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-151054-m02"
	I0729 20:39:17.015873       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:39:19.164307       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.765076ms"
	I0729 20:39:19.181803       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.427586ms"
	I0729 20:39:19.181970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="119.376µs"
	I0729 20:39:19.182093       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.526µs"
	I0729 20:39:19.184273       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.285µs"
	I0729 20:39:22.764815       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.933623ms"
	I0729 20:39:22.765278       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.418µs"
	I0729 20:39:22.807838       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.888226ms"
	I0729 20:39:22.808045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.79µs"
	I0729 20:39:50.305267       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151054-m03\" does not exist"
	I0729 20:39:50.307465       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:39:50.340955       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151054-m03" podCIDRs=["10.244.2.0/24"]
	I0729 20:39:54.156804       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-151054-m03"
	I0729 20:40:09.735448       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:40:37.854567       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:40:38.917537       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:40:38.918519       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151054-m03\" does not exist"
	I0729 20:40:38.936837       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151054-m03" podCIDRs=["10.244.3.0/24"]
	I0729 20:40:58.217720       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:41:39.211790       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m03"
	I0729 20:41:39.234455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.318587ms"
	I0729 20:41:39.234584       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.688µs"
	
	
	==> kube-controller-manager [dff53b546c4c3702886138d0b4711c165dbba5bdd1118c3abb7e2603b8ddac15] <==
	I0729 20:45:26.161669       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="47.008µs"
	I0729 20:45:27.294559       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151054-m02\" does not exist"
	I0729 20:45:27.307748       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151054-m02" podCIDRs=["10.244.1.0/24"]
	I0729 20:45:28.240793       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="69.991µs"
	I0729 20:45:28.252613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.56µs"
	I0729 20:45:28.255827       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.453µs"
	I0729 20:45:28.263452       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="86.139µs"
	I0729 20:45:28.266938       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="56.196µs"
	I0729 20:45:45.641132       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:45:45.662715       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.06µs"
	I0729 20:45:45.676069       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.886µs"
	I0729 20:45:49.499871       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.698808ms"
	I0729 20:45:49.500334       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.357µs"
	I0729 20:46:03.521364       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:46:04.531306       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:46:04.531598       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-151054-m03\" does not exist"
	I0729 20:46:04.544939       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-151054-m03" podCIDRs=["10.244.2.0/24"]
	I0729 20:46:23.820430       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:46:29.043604       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-151054-m02"
	I0729 20:47:10.436970       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.632295ms"
	I0729 20:47:10.437725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.331µs"
	I0729 20:47:20.328596       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-bhsjj"
	I0729 20:47:20.352845       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-bhsjj"
	I0729 20:47:20.352881       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-dj5sl"
	I0729 20:47:20.372182       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-dj5sl"
	
	
	==> kube-proxy [8cc1098813fc68af4447889f8e1a0ab2502ac50e35de420a3715744f54a9a2d0] <==
	I0729 20:38:12.205950       1 server_linux.go:69] "Using iptables proxy"
	I0729 20:38:12.217057       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.229"]
	I0729 20:38:12.248523       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 20:38:12.248573       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 20:38:12.248589       1 server_linux.go:165] "Using iptables Proxier"
	I0729 20:38:12.250875       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 20:38:12.251085       1 server.go:872] "Version info" version="v1.30.3"
	I0729 20:38:12.251109       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:38:12.252220       1 config.go:192] "Starting service config controller"
	I0729 20:38:12.252251       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 20:38:12.252316       1 config.go:101] "Starting endpoint slice config controller"
	I0729 20:38:12.252321       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 20:38:12.252904       1 config.go:319] "Starting node config controller"
	I0729 20:38:12.252926       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 20:38:12.353082       1 shared_informer.go:320] Caches are synced for node config
	I0729 20:38:12.353126       1 shared_informer.go:320] Caches are synced for service config
	I0729 20:38:12.353165       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [c54494c8905f16425e071d55bd738b79d3173f93e384b59b7e84fede2096c255] <==
	I0729 20:44:45.656551       1 server_linux.go:69] "Using iptables proxy"
	I0729 20:44:47.275205       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.229"]
	I0729 20:44:47.346503       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 20:44:47.346605       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 20:44:47.346635       1 server_linux.go:165] "Using iptables Proxier"
	I0729 20:44:47.348885       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 20:44:47.349087       1 server.go:872] "Version info" version="v1.30.3"
	I0729 20:44:47.349117       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:44:47.350715       1 config.go:192] "Starting service config controller"
	I0729 20:44:47.350772       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 20:44:47.350816       1 config.go:101] "Starting endpoint slice config controller"
	I0729 20:44:47.350832       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 20:44:47.352263       1 config.go:319] "Starting node config controller"
	I0729 20:44:47.353024       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 20:44:47.451053       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 20:44:47.451114       1 shared_informer.go:320] Caches are synced for service config
	I0729 20:44:47.453279       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [68e87aba726b426aa6ce249d71d77623d5a91dd30ca5292310cb0e5220f80c5b] <==
	I0729 20:44:45.431738       1 serving.go:380] Generated self-signed cert in-memory
	W0729 20:44:47.255205       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 20:44:47.255280       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 20:44:47.255290       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 20:44:47.255296       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 20:44:47.273187       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 20:44:47.273267       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:44:47.277109       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 20:44:47.277139       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 20:44:47.280602       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 20:44:47.280669       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 20:44:47.378630       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [888230c2bc7db0c13a92b39530084837b21ca72108fe4af8328b412e66c2b104] <==
	E0729 20:37:53.926225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0729 20:37:53.925809       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 20:37:53.926277       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0729 20:37:53.925848       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 20:37:53.926301       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 20:37:53.926085       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 20:37:53.926313       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 20:37:53.925131       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 20:37:53.926354       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0729 20:37:53.926567       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 20:37:53.926599       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0729 20:37:54.794255       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 20:37:54.794297       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0729 20:37:54.900722       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 20:37:54.900963       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 20:37:55.061253       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 20:37:55.061294       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0729 20:37:55.131159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0729 20:37:55.131369       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0729 20:37:55.154237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0729 20:37:55.154640       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0729 20:37:55.162041       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 20:37:55.162113       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0729 20:37:56.718893       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 20:43:03.679293       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: E0729 20:44:50.554727    3798 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-151054\" already exists" pod="kube-system/kube-apiserver-multinode-151054"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: E0729 20:44:50.555316    3798 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-151054\" already exists" pod="kube-system/kube-controller-manager-multinode-151054"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: E0729 20:44:50.556160    3798 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"etcd-multinode-151054\" already exists" pod="kube-system/etcd-multinode-151054"
	Jul 29 20:44:50 multinode-151054 kubelet[3798]: I0729 20:44:50.597247    3798 scope.go:117] "RemoveContainer" containerID="c8c79ce8f8c6fc94e8c731f1c1c596ee577523bc7ad9e98845ef407acc91511a"
	Jul 29 20:44:53 multinode-151054 kubelet[3798]: I0729 20:44:53.333122    3798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Jul 29 20:45:49 multinode-151054 kubelet[3798]: E0729 20:45:49.458012    3798 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:45:49 multinode-151054 kubelet[3798]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:45:49 multinode-151054 kubelet[3798]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:45:49 multinode-151054 kubelet[3798]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:45:49 multinode-151054 kubelet[3798]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:46:49 multinode-151054 kubelet[3798]: E0729 20:46:49.457348    3798 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:46:49 multinode-151054 kubelet[3798]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:46:49 multinode-151054 kubelet[3798]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:46:49 multinode-151054 kubelet[3798]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:46:49 multinode-151054 kubelet[3798]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:47:49 multinode-151054 kubelet[3798]: E0729 20:47:49.457463    3798 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:47:49 multinode-151054 kubelet[3798]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:47:49 multinode-151054 kubelet[3798]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:47:49 multinode-151054 kubelet[3798]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:47:49 multinode-151054 kubelet[3798]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 29 20:48:49 multinode-151054 kubelet[3798]: E0729 20:48:49.457600    3798 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 29 20:48:49 multinode-151054 kubelet[3798]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 29 20:48:49 multinode-151054 kubelet[3798]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 29 20:48:49 multinode-151054 kubelet[3798]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 29 20:48:49 multinode-151054 kubelet[3798]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0729 20:48:49.846222  776126 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/19344-733808/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-151054 -n multinode-151054
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-151054 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.37s)

                                                
                                    
TestPreload (242.25s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-596687 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0729 20:53:14.091151  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-596687 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m43.5511012s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-596687 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-596687 image pull gcr.io/k8s-minikube/busybox: (2.88529976s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-596687
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-596687: (6.537802201s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-596687 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-596687 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m6.465650406s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-596687 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-07-29 20:56:33.779297682 +0000 UTC m=+5597.948004867
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-596687 -n test-preload-596687
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-596687 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-596687 logs -n 25: (1.050914581s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n multinode-151054 sudo cat                                       | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | /home/docker/cp-test_multinode-151054-m03_multinode-151054.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-151054 cp multinode-151054-m03:/home/docker/cp-test.txt                       | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m02:/home/docker/cp-test_multinode-151054-m03_multinode-151054-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n                                                                 | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | multinode-151054-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-151054 ssh -n multinode-151054-m02 sudo cat                                   | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	|         | /home/docker/cp-test_multinode-151054-m03_multinode-151054-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-151054 node stop m03                                                          | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:40 UTC |
	| node    | multinode-151054 node start                                                             | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:40 UTC | 29 Jul 24 20:41 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-151054                                                                | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:41 UTC |                     |
	| stop    | -p multinode-151054                                                                     | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:41 UTC |                     |
	| start   | -p multinode-151054                                                                     | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:43 UTC | 29 Jul 24 20:46 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-151054                                                                | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:46 UTC |                     |
	| node    | multinode-151054 node delete                                                            | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:46 UTC | 29 Jul 24 20:46 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-151054 stop                                                                   | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:46 UTC |                     |
	| start   | -p multinode-151054                                                                     | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:48 UTC | 29 Jul 24 20:51 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-151054                                                                | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:51 UTC |                     |
	| start   | -p multinode-151054-m02                                                                 | multinode-151054-m02 | jenkins | v1.33.1 | 29 Jul 24 20:51 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-151054-m03                                                                 | multinode-151054-m03 | jenkins | v1.33.1 | 29 Jul 24 20:51 UTC | 29 Jul 24 20:52 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-151054                                                                 | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:52 UTC |                     |
	| delete  | -p multinode-151054-m03                                                                 | multinode-151054-m03 | jenkins | v1.33.1 | 29 Jul 24 20:52 UTC | 29 Jul 24 20:52 UTC |
	| delete  | -p multinode-151054                                                                     | multinode-151054     | jenkins | v1.33.1 | 29 Jul 24 20:52 UTC | 29 Jul 24 20:52 UTC |
	| start   | -p test-preload-596687                                                                  | test-preload-596687  | jenkins | v1.33.1 | 29 Jul 24 20:52 UTC | 29 Jul 24 20:55 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-596687 image pull                                                          | test-preload-596687  | jenkins | v1.33.1 | 29 Jul 24 20:55 UTC | 29 Jul 24 20:55 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-596687                                                                  | test-preload-596687  | jenkins | v1.33.1 | 29 Jul 24 20:55 UTC | 29 Jul 24 20:55 UTC |
	| start   | -p test-preload-596687                                                                  | test-preload-596687  | jenkins | v1.33.1 | 29 Jul 24 20:55 UTC | 29 Jul 24 20:56 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-596687 image list                                                          | test-preload-596687  | jenkins | v1.33.1 | 29 Jul 24 20:56 UTC | 29 Jul 24 20:56 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 20:55:27
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 20:55:27.132375  778744 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:55:27.132651  778744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:55:27.132661  778744 out.go:304] Setting ErrFile to fd 2...
	I0729 20:55:27.132667  778744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:55:27.132843  778744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:55:27.133425  778744 out.go:298] Setting JSON to false
	I0729 20:55:27.134380  778744 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":16674,"bootTime":1722269853,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 20:55:27.134442  778744 start.go:139] virtualization: kvm guest
	I0729 20:55:27.136742  778744 out.go:177] * [test-preload-596687] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 20:55:27.138270  778744 notify.go:220] Checking for updates...
	I0729 20:55:27.138285  778744 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 20:55:27.139654  778744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 20:55:27.141049  778744 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:55:27.142451  778744 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:55:27.143691  778744 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 20:55:27.145108  778744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 20:55:27.146812  778744 config.go:182] Loaded profile config "test-preload-596687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 20:55:27.147193  778744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:55:27.147251  778744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:55:27.162309  778744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35753
	I0729 20:55:27.162775  778744 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:55:27.163260  778744 main.go:141] libmachine: Using API Version  1
	I0729 20:55:27.163281  778744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:55:27.163643  778744 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:55:27.163854  778744 main.go:141] libmachine: (test-preload-596687) Calling .DriverName
	I0729 20:55:27.165728  778744 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 20:55:27.167140  778744 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 20:55:27.167472  778744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:55:27.167512  778744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:55:27.182014  778744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41635
	I0729 20:55:27.182448  778744 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:55:27.182918  778744 main.go:141] libmachine: Using API Version  1
	I0729 20:55:27.182940  778744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:55:27.183261  778744 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:55:27.183513  778744 main.go:141] libmachine: (test-preload-596687) Calling .DriverName
	I0729 20:55:27.217704  778744 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 20:55:27.218801  778744 start.go:297] selected driver: kvm2
	I0729 20:55:27.218815  778744 start.go:901] validating driver "kvm2" against &{Name:test-preload-596687 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-596687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:55:27.218927  778744 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 20:55:27.219605  778744 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:55:27.219668  778744 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 20:55:27.234828  778744 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 20:55:27.235157  778744 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 20:55:27.235219  778744 cni.go:84] Creating CNI manager for ""
	I0729 20:55:27.235232  778744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 20:55:27.235284  778744 start.go:340] cluster config:
	{Name:test-preload-596687 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-596687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:55:27.235383  778744 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:55:27.237105  778744 out.go:177] * Starting "test-preload-596687" primary control-plane node in "test-preload-596687" cluster
	I0729 20:55:27.238429  778744 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 20:55:27.356617  778744 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0729 20:55:27.356655  778744 cache.go:56] Caching tarball of preloaded images
	I0729 20:55:27.356829  778744 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 20:55:27.358808  778744 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0729 20:55:27.360159  778744 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 20:55:27.457890  778744 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0729 20:55:38.862940  778744 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 20:55:38.863053  778744 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0729 20:55:39.729068  778744 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I0729 20:55:39.729217  778744 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/test-preload-596687/config.json ...
	I0729 20:55:39.729464  778744 start.go:360] acquireMachinesLock for test-preload-596687: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:55:39.729530  778744 start.go:364] duration metric: took 43.174µs to acquireMachinesLock for "test-preload-596687"
	I0729 20:55:39.729547  778744 start.go:96] Skipping create...Using existing machine configuration
	I0729 20:55:39.729553  778744 fix.go:54] fixHost starting: 
	I0729 20:55:39.729885  778744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:55:39.729918  778744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:55:39.744878  778744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39765
	I0729 20:55:39.745436  778744 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:55:39.745961  778744 main.go:141] libmachine: Using API Version  1
	I0729 20:55:39.745990  778744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:55:39.746412  778744 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:55:39.746627  778744 main.go:141] libmachine: (test-preload-596687) Calling .DriverName
	I0729 20:55:39.746772  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetState
	I0729 20:55:39.748388  778744 fix.go:112] recreateIfNeeded on test-preload-596687: state=Stopped err=<nil>
	I0729 20:55:39.748416  778744 main.go:141] libmachine: (test-preload-596687) Calling .DriverName
	W0729 20:55:39.748580  778744 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 20:55:39.751491  778744 out.go:177] * Restarting existing kvm2 VM for "test-preload-596687" ...
	I0729 20:55:39.753012  778744 main.go:141] libmachine: (test-preload-596687) Calling .Start
	I0729 20:55:39.753204  778744 main.go:141] libmachine: (test-preload-596687) Ensuring networks are active...
	I0729 20:55:39.754002  778744 main.go:141] libmachine: (test-preload-596687) Ensuring network default is active
	I0729 20:55:39.754346  778744 main.go:141] libmachine: (test-preload-596687) Ensuring network mk-test-preload-596687 is active
	I0729 20:55:39.754708  778744 main.go:141] libmachine: (test-preload-596687) Getting domain xml...
	I0729 20:55:39.755433  778744 main.go:141] libmachine: (test-preload-596687) Creating domain...
	I0729 20:55:40.967222  778744 main.go:141] libmachine: (test-preload-596687) Waiting to get IP...
	I0729 20:55:40.968173  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:40.968613  778744 main.go:141] libmachine: (test-preload-596687) DBG | unable to find current IP address of domain test-preload-596687 in network mk-test-preload-596687
	I0729 20:55:40.968674  778744 main.go:141] libmachine: (test-preload-596687) DBG | I0729 20:55:40.968589  778828 retry.go:31] will retry after 257.458229ms: waiting for machine to come up
	I0729 20:55:41.228401  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:41.228812  778744 main.go:141] libmachine: (test-preload-596687) DBG | unable to find current IP address of domain test-preload-596687 in network mk-test-preload-596687
	I0729 20:55:41.228839  778744 main.go:141] libmachine: (test-preload-596687) DBG | I0729 20:55:41.228758  778828 retry.go:31] will retry after 332.257363ms: waiting for machine to come up
	I0729 20:55:41.562421  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:41.562821  778744 main.go:141] libmachine: (test-preload-596687) DBG | unable to find current IP address of domain test-preload-596687 in network mk-test-preload-596687
	I0729 20:55:41.562851  778744 main.go:141] libmachine: (test-preload-596687) DBG | I0729 20:55:41.562765  778828 retry.go:31] will retry after 364.922226ms: waiting for machine to come up
	I0729 20:55:41.929446  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:41.929819  778744 main.go:141] libmachine: (test-preload-596687) DBG | unable to find current IP address of domain test-preload-596687 in network mk-test-preload-596687
	I0729 20:55:41.929853  778744 main.go:141] libmachine: (test-preload-596687) DBG | I0729 20:55:41.929765  778828 retry.go:31] will retry after 539.711302ms: waiting for machine to come up
	I0729 20:55:42.471544  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:42.472070  778744 main.go:141] libmachine: (test-preload-596687) DBG | unable to find current IP address of domain test-preload-596687 in network mk-test-preload-596687
	I0729 20:55:42.472100  778744 main.go:141] libmachine: (test-preload-596687) DBG | I0729 20:55:42.472011  778828 retry.go:31] will retry after 499.553221ms: waiting for machine to come up
	I0729 20:55:42.972691  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:42.973089  778744 main.go:141] libmachine: (test-preload-596687) DBG | unable to find current IP address of domain test-preload-596687 in network mk-test-preload-596687
	I0729 20:55:42.973231  778744 main.go:141] libmachine: (test-preload-596687) DBG | I0729 20:55:42.973068  778828 retry.go:31] will retry after 650.754084ms: waiting for machine to come up
	I0729 20:55:43.625138  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:43.625586  778744 main.go:141] libmachine: (test-preload-596687) DBG | unable to find current IP address of domain test-preload-596687 in network mk-test-preload-596687
	I0729 20:55:43.625609  778744 main.go:141] libmachine: (test-preload-596687) DBG | I0729 20:55:43.625534  778828 retry.go:31] will retry after 1.002549989s: waiting for machine to come up
	I0729 20:55:44.629825  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:44.630296  778744 main.go:141] libmachine: (test-preload-596687) DBG | unable to find current IP address of domain test-preload-596687 in network mk-test-preload-596687
	I0729 20:55:44.630365  778744 main.go:141] libmachine: (test-preload-596687) DBG | I0729 20:55:44.630236  778828 retry.go:31] will retry after 1.218988285s: waiting for machine to come up
	I0729 20:55:45.851175  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:45.851528  778744 main.go:141] libmachine: (test-preload-596687) DBG | unable to find current IP address of domain test-preload-596687 in network mk-test-preload-596687
	I0729 20:55:45.851682  778744 main.go:141] libmachine: (test-preload-596687) DBG | I0729 20:55:45.851494  778828 retry.go:31] will retry after 1.638341659s: waiting for machine to come up
	I0729 20:55:47.492255  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:47.492720  778744 main.go:141] libmachine: (test-preload-596687) DBG | unable to find current IP address of domain test-preload-596687 in network mk-test-preload-596687
	I0729 20:55:47.492751  778744 main.go:141] libmachine: (test-preload-596687) DBG | I0729 20:55:47.492660  778828 retry.go:31] will retry after 2.109041718s: waiting for machine to come up
	I0729 20:55:49.605239  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:49.605693  778744 main.go:141] libmachine: (test-preload-596687) DBG | unable to find current IP address of domain test-preload-596687 in network mk-test-preload-596687
	I0729 20:55:49.605720  778744 main.go:141] libmachine: (test-preload-596687) DBG | I0729 20:55:49.605647  778828 retry.go:31] will retry after 2.842206666s: waiting for machine to come up
	I0729 20:55:52.451341  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:52.451935  778744 main.go:141] libmachine: (test-preload-596687) DBG | unable to find current IP address of domain test-preload-596687 in network mk-test-preload-596687
	I0729 20:55:52.451962  778744 main.go:141] libmachine: (test-preload-596687) DBG | I0729 20:55:52.451890  778828 retry.go:31] will retry after 2.793822212s: waiting for machine to come up
	I0729 20:55:55.247543  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.247956  778744 main.go:141] libmachine: (test-preload-596687) Found IP for machine: 192.168.39.110
	I0729 20:55:55.247989  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has current primary IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.247997  778744 main.go:141] libmachine: (test-preload-596687) Reserving static IP address...
	I0729 20:55:55.248643  778744 main.go:141] libmachine: (test-preload-596687) Reserved static IP address: 192.168.39.110
	I0729 20:55:55.248678  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "test-preload-596687", mac: "52:54:00:80:81:73", ip: "192.168.39.110"} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:55.248708  778744 main.go:141] libmachine: (test-preload-596687) Waiting for SSH to be available...
	I0729 20:55:55.248735  778744 main.go:141] libmachine: (test-preload-596687) DBG | skip adding static IP to network mk-test-preload-596687 - found existing host DHCP lease matching {name: "test-preload-596687", mac: "52:54:00:80:81:73", ip: "192.168.39.110"}
	I0729 20:55:55.248752  778744 main.go:141] libmachine: (test-preload-596687) DBG | Getting to WaitForSSH function...
	I0729 20:55:55.250814  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.251073  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:55.251100  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.251196  778744 main.go:141] libmachine: (test-preload-596687) DBG | Using SSH client type: external
	I0729 20:55:55.251212  778744 main.go:141] libmachine: (test-preload-596687) DBG | Using SSH private key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/test-preload-596687/id_rsa (-rw-------)
	I0729 20:55:55.251244  778744 main.go:141] libmachine: (test-preload-596687) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.110 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/test-preload-596687/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 20:55:55.251257  778744 main.go:141] libmachine: (test-preload-596687) DBG | About to run SSH command:
	I0729 20:55:55.251272  778744 main.go:141] libmachine: (test-preload-596687) DBG | exit 0
	I0729 20:55:55.376014  778744 main.go:141] libmachine: (test-preload-596687) DBG | SSH cmd err, output: <nil>: 
	I0729 20:55:55.376521  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetConfigRaw
	I0729 20:55:55.377163  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetIP
	I0729 20:55:55.380060  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.380449  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:55.380495  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.380687  778744 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/test-preload-596687/config.json ...
	I0729 20:55:55.380880  778744 machine.go:94] provisionDockerMachine start ...
	I0729 20:55:55.380899  778744 main.go:141] libmachine: (test-preload-596687) Calling .DriverName
	I0729 20:55:55.381123  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHHostname
	I0729 20:55:55.383526  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.383836  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:55.383869  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.383976  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHPort
	I0729 20:55:55.384166  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:55:55.384372  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:55:55.384581  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHUsername
	I0729 20:55:55.384783  778744 main.go:141] libmachine: Using SSH client type: native
	I0729 20:55:55.385039  778744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0729 20:55:55.385055  778744 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 20:55:55.488056  778744 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0729 20:55:55.488100  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetMachineName
	I0729 20:55:55.488397  778744 buildroot.go:166] provisioning hostname "test-preload-596687"
	I0729 20:55:55.488425  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetMachineName
	I0729 20:55:55.488651  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHHostname
	I0729 20:55:55.491218  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.491554  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:55.491589  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.491679  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHPort
	I0729 20:55:55.491872  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:55:55.492014  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:55:55.492173  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHUsername
	I0729 20:55:55.492300  778744 main.go:141] libmachine: Using SSH client type: native
	I0729 20:55:55.492471  778744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0729 20:55:55.492483  778744 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-596687 && echo "test-preload-596687" | sudo tee /etc/hostname
	I0729 20:55:55.611462  778744 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-596687
	
	I0729 20:55:55.611498  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHHostname
	I0729 20:55:55.614249  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.614528  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:55.614556  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.614753  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHPort
	I0729 20:55:55.614945  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:55:55.615087  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:55:55.615242  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHUsername
	I0729 20:55:55.615456  778744 main.go:141] libmachine: Using SSH client type: native
	I0729 20:55:55.615645  778744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0729 20:55:55.615668  778744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-596687' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-596687/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-596687' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 20:55:55.728237  778744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:55:55.728279  778744 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 20:55:55.728334  778744 buildroot.go:174] setting up certificates
	I0729 20:55:55.728346  778744 provision.go:84] configureAuth start
	I0729 20:55:55.728358  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetMachineName
	I0729 20:55:55.728647  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetIP
	I0729 20:55:55.731297  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.731656  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:55.731677  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.731863  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHHostname
	I0729 20:55:55.734109  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.734459  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:55.734490  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.734606  778744 provision.go:143] copyHostCerts
	I0729 20:55:55.734673  778744 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem, removing ...
	I0729 20:55:55.734684  778744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:55:55.734765  778744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 20:55:55.734921  778744 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem, removing ...
	I0729 20:55:55.734936  778744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:55:55.734979  778744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 20:55:55.735073  778744 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem, removing ...
	I0729 20:55:55.735083  778744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:55:55.735119  778744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 20:55:55.735217  778744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.test-preload-596687 san=[127.0.0.1 192.168.39.110 localhost minikube test-preload-596687]
	I0729 20:55:55.892483  778744 provision.go:177] copyRemoteCerts
	I0729 20:55:55.892559  778744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 20:55:55.892601  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHHostname
	I0729 20:55:55.895135  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.895433  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:55.895482  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:55.895609  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHPort
	I0729 20:55:55.895813  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:55:55.896072  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHUsername
	I0729 20:55:55.896263  778744 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/test-preload-596687/id_rsa Username:docker}
	I0729 20:55:55.977336  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0729 20:55:55.999860  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 20:55:56.024302  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 20:55:56.046624  778744 provision.go:87] duration metric: took 318.264172ms to configureAuth
	I0729 20:55:56.046650  778744 buildroot.go:189] setting minikube options for container-runtime
	I0729 20:55:56.046835  778744 config.go:182] Loaded profile config "test-preload-596687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 20:55:56.046927  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHHostname
	I0729 20:55:56.049426  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:56.049774  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:56.049800  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:56.049996  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHPort
	I0729 20:55:56.050339  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:55:56.050500  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:55:56.050642  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHUsername
	I0729 20:55:56.050800  778744 main.go:141] libmachine: Using SSH client type: native
	I0729 20:55:56.050980  778744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0729 20:55:56.050995  778744 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 20:55:56.306550  778744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 20:55:56.306584  778744 machine.go:97] duration metric: took 925.690252ms to provisionDockerMachine
	I0729 20:55:56.306596  778744 start.go:293] postStartSetup for "test-preload-596687" (driver="kvm2")
	I0729 20:55:56.306606  778744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 20:55:56.306624  778744 main.go:141] libmachine: (test-preload-596687) Calling .DriverName
	I0729 20:55:56.306988  778744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 20:55:56.307026  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHHostname
	I0729 20:55:56.309702  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:56.310073  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:56.310126  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:56.310250  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHPort
	I0729 20:55:56.310443  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:55:56.310605  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHUsername
	I0729 20:55:56.310728  778744 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/test-preload-596687/id_rsa Username:docker}
	I0729 20:55:56.390366  778744 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 20:55:56.394630  778744 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 20:55:56.394655  778744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 20:55:56.394748  778744 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 20:55:56.394836  778744 filesync.go:149] local asset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> 7409622.pem in /etc/ssl/certs
	I0729 20:55:56.394951  778744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 20:55:56.403885  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:55:56.425816  778744 start.go:296] duration metric: took 119.203152ms for postStartSetup
	I0729 20:55:56.425864  778744 fix.go:56] duration metric: took 16.69631028s for fixHost
	I0729 20:55:56.425892  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHHostname
	I0729 20:55:56.428620  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:56.429111  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:56.429144  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:56.429418  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHPort
	I0729 20:55:56.429668  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:55:56.429889  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:55:56.430102  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHUsername
	I0729 20:55:56.430294  778744 main.go:141] libmachine: Using SSH client type: native
	I0729 20:55:56.430476  778744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.39.110 22 <nil> <nil>}
	I0729 20:55:56.430487  778744 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0729 20:55:56.532480  778744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722286556.507117934
	
	I0729 20:55:56.532504  778744 fix.go:216] guest clock: 1722286556.507117934
	I0729 20:55:56.532511  778744 fix.go:229] Guest: 2024-07-29 20:55:56.507117934 +0000 UTC Remote: 2024-07-29 20:55:56.42586892 +0000 UTC m=+29.328357269 (delta=81.249014ms)
	I0729 20:55:56.532563  778744 fix.go:200] guest clock delta is within tolerance: 81.249014ms
	I0729 20:55:56.532570  778744 start.go:83] releasing machines lock for "test-preload-596687", held for 16.803030079s
	I0729 20:55:56.532588  778744 main.go:141] libmachine: (test-preload-596687) Calling .DriverName
	I0729 20:55:56.532857  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetIP
	I0729 20:55:56.535605  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:56.536073  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:56.536104  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:56.536238  778744 main.go:141] libmachine: (test-preload-596687) Calling .DriverName
	I0729 20:55:56.536917  778744 main.go:141] libmachine: (test-preload-596687) Calling .DriverName
	I0729 20:55:56.537110  778744 main.go:141] libmachine: (test-preload-596687) Calling .DriverName
	I0729 20:55:56.537219  778744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 20:55:56.537267  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHHostname
	I0729 20:55:56.537349  778744 ssh_runner.go:195] Run: cat /version.json
	I0729 20:55:56.537368  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHHostname
	I0729 20:55:56.539979  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:56.540205  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:56.540336  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:56.540363  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:56.540513  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHPort
	I0729 20:55:56.540629  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:56.540656  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:56.540659  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:55:56.540802  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHPort
	I0729 20:55:56.540854  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHUsername
	I0729 20:55:56.540974  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:55:56.541069  778744 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/test-preload-596687/id_rsa Username:docker}
	I0729 20:55:56.541124  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHUsername
	I0729 20:55:56.541259  778744 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/test-preload-596687/id_rsa Username:docker}
	I0729 20:55:56.647192  778744 ssh_runner.go:195] Run: systemctl --version
	I0729 20:55:56.652792  778744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 20:55:56.794418  778744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 20:55:56.802102  778744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 20:55:56.802170  778744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 20:55:56.816803  778744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 20:55:56.816826  778744 start.go:495] detecting cgroup driver to use...
	I0729 20:55:56.816888  778744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 20:55:56.830976  778744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 20:55:56.843650  778744 docker.go:216] disabling cri-docker service (if available) ...
	I0729 20:55:56.843717  778744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 20:55:56.855800  778744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 20:55:56.867808  778744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 20:55:56.973372  778744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 20:55:57.108075  778744 docker.go:232] disabling docker service ...
	I0729 20:55:57.108170  778744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 20:55:57.121566  778744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 20:55:57.133506  778744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 20:55:57.259042  778744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 20:55:57.386544  778744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 20:55:57.404107  778744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 20:55:57.420736  778744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0729 20:55:57.420813  778744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:55:57.430409  778744 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 20:55:57.430474  778744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:55:57.440215  778744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:55:57.449955  778744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:55:57.459229  778744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 20:55:57.468804  778744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:55:57.478149  778744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:55:57.493470  778744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:55:57.502808  778744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 20:55:57.511329  778744 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 20:55:57.511379  778744 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 20:55:57.523309  778744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 20:55:57.531855  778744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:55:57.648775  778744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 20:55:57.770771  778744 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 20:55:57.770844  778744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 20:55:57.775414  778744 start.go:563] Will wait 60s for crictl version
	I0729 20:55:57.775468  778744 ssh_runner.go:195] Run: which crictl
	I0729 20:55:57.778889  778744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 20:55:57.814793  778744 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 20:55:57.814901  778744 ssh_runner.go:195] Run: crio --version
	I0729 20:55:57.843617  778744 ssh_runner.go:195] Run: crio --version
	I0729 20:55:57.871242  778744 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0729 20:55:57.872639  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetIP
	I0729 20:55:57.875312  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:57.875624  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:55:57.875655  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:55:57.875875  778744 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0729 20:55:57.879745  778744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 20:55:57.891104  778744 kubeadm.go:883] updating cluster {Name:test-preload-596687 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-596687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 20:55:57.891215  778744 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0729 20:55:57.891257  778744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:55:57.924290  778744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0729 20:55:57.924364  778744 ssh_runner.go:195] Run: which lz4
	I0729 20:55:57.928391  778744 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0729 20:55:57.932228  778744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 20:55:57.932257  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0729 20:55:59.253037  778744 crio.go:462] duration metric: took 1.324685104s to copy over tarball
	I0729 20:55:59.253120  778744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 20:56:01.557154  778744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.303997662s)
	I0729 20:56:01.557245  778744 crio.go:469] duration metric: took 2.304128121s to extract the tarball
	I0729 20:56:01.557256  778744 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0729 20:56:01.597263  778744 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:56:01.638043  778744 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0729 20:56:01.638072  778744 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 20:56:01.638162  778744 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 20:56:01.638166  778744 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 20:56:01.638239  778744 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 20:56:01.638262  778744 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0729 20:56:01.638272  778744 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0729 20:56:01.638206  778744 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 20:56:01.638168  778744 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 20:56:01.638236  778744 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 20:56:01.639973  778744 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 20:56:01.639985  778744 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0729 20:56:01.640005  778744 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 20:56:01.640004  778744 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0729 20:56:01.639973  778744 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 20:56:01.640078  778744 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 20:56:01.639979  778744 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 20:56:01.640402  778744 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 20:56:01.877404  778744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0729 20:56:01.897314  778744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 20:56:01.898011  778744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0729 20:56:01.901090  778744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0729 20:56:01.911411  778744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0729 20:56:01.936417  778744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0729 20:56:01.942014  778744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0729 20:56:01.954710  778744 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0729 20:56:01.954751  778744 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0729 20:56:01.954800  778744 ssh_runner.go:195] Run: which crictl
	I0729 20:56:02.010257  778744 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0729 20:56:02.010303  778744 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 20:56:02.010358  778744 ssh_runner.go:195] Run: which crictl
	I0729 20:56:02.027141  778744 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0729 20:56:02.027195  778744 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0729 20:56:02.027250  778744 ssh_runner.go:195] Run: which crictl
	I0729 20:56:02.027141  778744 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0729 20:56:02.027313  778744 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0729 20:56:02.027497  778744 ssh_runner.go:195] Run: which crictl
	I0729 20:56:02.049780  778744 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0729 20:56:02.049820  778744 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0729 20:56:02.049888  778744 ssh_runner.go:195] Run: which crictl
	I0729 20:56:02.077372  778744 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0729 20:56:02.077420  778744 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0729 20:56:02.077440  778744 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0729 20:56:02.077469  778744 ssh_runner.go:195] Run: which crictl
	I0729 20:56:02.077477  778744 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0729 20:56:02.077513  778744 ssh_runner.go:195] Run: which crictl
	I0729 20:56:02.077515  778744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0729 20:56:02.077549  778744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0729 20:56:02.077594  778744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0729 20:56:02.077632  778744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0729 20:56:02.077672  778744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0729 20:56:02.097331  778744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0729 20:56:02.187911  778744 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0729 20:56:02.187965  778744 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0729 20:56:02.187978  778744 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0729 20:56:02.188025  778744 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 20:56:02.188054  778744 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0729 20:56:02.198396  778744 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0729 20:56:02.198514  778744 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0729 20:56:02.207564  778744 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0729 20:56:02.207618  778744 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0729 20:56:02.207662  778744 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 20:56:02.207707  778744 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 20:56:02.207668  778744 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0729 20:56:02.207866  778744 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0729 20:56:02.240335  778744 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0729 20:56:02.240389  778744 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0729 20:56:02.240404  778744 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0729 20:56:02.240430  778744 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 20:56:02.240441  778744 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0729 20:56:02.240482  778744 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0729 20:56:02.240504  778744 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0729 20:56:02.240553  778744 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0729 20:56:02.240566  778744 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0729 20:56:02.240611  778744 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0729 20:56:02.245776  778744 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0729 20:56:02.479552  778744 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 20:56:05.093355  778744 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.852884469s)
	I0729 20:56:05.093403  778744 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0729 20:56:05.093433  778744 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 20:56:05.093461  778744 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.61387327s)
	I0729 20:56:05.093490  778744 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0729 20:56:05.739582  778744 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0729 20:56:05.739633  778744 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I0729 20:56:05.739696  778744 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0729 20:56:05.878540  778744 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0729 20:56:05.878597  778744 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 20:56:05.878645  778744 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0729 20:56:06.616162  778744 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0729 20:56:06.616219  778744 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 20:56:06.616331  778744 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0729 20:56:07.057179  778744 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0729 20:56:07.057238  778744 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0729 20:56:07.057332  778744 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0729 20:56:09.201822  778744 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.144453619s)
	I0729 20:56:09.201868  778744 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0729 20:56:09.201908  778744 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 20:56:09.202043  778744 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0729 20:56:10.044772  778744 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0729 20:56:10.044825  778744 cache_images.go:123] Successfully loaded all cached images
	I0729 20:56:10.044832  778744 cache_images.go:92] duration metric: took 8.40674587s to LoadCachedImages
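The image-loading loop above boils down to two podman calls per image: an inspect to see whether the runtime already has it, and a load of the cached tarball when it does not. A rough Go sketch under that assumption (image names and paths are illustrative):

	// Sketch only: check for an image in the runtime's storage, load it from a
	// cached tarball if it is missing.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func ensureImage(image, cachedTar string) error {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			return nil // image already present in the container runtime
		}
		if out, err := exec.Command("sudo", "podman", "load", "-i", cachedTar).CombinedOutput(); err != nil {
			return fmt.Errorf("load %s: %v: %s", image, err, out)
		}
		return nil
	}
	
	func main() {
		if err := ensureImage("registry.k8s.io/pause:3.7", "/var/lib/minikube/images/pause_3.7"); err != nil {
			fmt.Println(err)
		}
	}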
	I0729 20:56:10.044849  778744 kubeadm.go:934] updating node { 192.168.39.110 8443 v1.24.4 crio true true} ...
	I0729 20:56:10.045011  778744 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-596687 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-596687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 20:56:10.045114  778744 ssh_runner.go:195] Run: crio config
	I0729 20:56:10.093199  778744 cni.go:84] Creating CNI manager for ""
	I0729 20:56:10.093248  778744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 20:56:10.093264  778744 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 20:56:10.093309  778744 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.110 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-596687 NodeName:test-preload-596687 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.110"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.110 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 20:56:10.093488  778744 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.110
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-596687"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.110
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.110"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 20:56:10.093588  778744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0729 20:56:10.106761  778744 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 20:56:10.106830  778744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 20:56:10.115897  778744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0729 20:56:10.130863  778744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 20:56:10.145751  778744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0729 20:56:10.161326  778744 ssh_runner.go:195] Run: grep 192.168.39.110	control-plane.minikube.internal$ /etc/hosts
	I0729 20:56:10.164813  778744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.110	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
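The bash pipeline above rewrites /etc/hosts by dropping any stale control-plane.minikube.internal line and appending the current mapping. A small Go sketch of the same idea, pointed at a scratch file rather than the real /etc/hosts:

	// Sketch only: drop any line ending in "<tab>host" and append "ip<tab>host".
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line == "" || strings.HasSuffix(line, "\t"+host) {
				continue // skip blanks and any stale mapping for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
	
	func main() {
		// Use a scratch copy so the sketch never touches the real /etc/hosts.
		if err := ensureHostsEntry("/tmp/hosts.copy", "192.168.39.110", "control-plane.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}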
	I0729 20:56:10.175720  778744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:56:10.287673  778744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:56:10.303546  778744 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/test-preload-596687 for IP: 192.168.39.110
	I0729 20:56:10.303577  778744 certs.go:194] generating shared ca certs ...
	I0729 20:56:10.303604  778744 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:56:10.303797  778744 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 20:56:10.303836  778744 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 20:56:10.303858  778744 certs.go:256] generating profile certs ...
	I0729 20:56:10.303946  778744 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/test-preload-596687/client.key
	I0729 20:56:10.304007  778744 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/test-preload-596687/apiserver.key.ac47a258
	I0729 20:56:10.304084  778744 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/test-preload-596687/proxy-client.key
	I0729 20:56:10.304251  778744 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem (1338 bytes)
	W0729 20:56:10.304285  778744 certs.go:480] ignoring /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962_empty.pem, impossibly tiny 0 bytes
	I0729 20:56:10.304295  778744 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 20:56:10.304323  778744 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 20:56:10.304346  778744 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 20:56:10.304370  778744 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 20:56:10.304409  778744 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:56:10.305139  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 20:56:10.347468  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 20:56:10.373991  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 20:56:10.402852  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 20:56:10.432449  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/test-preload-596687/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 20:56:10.457179  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/test-preload-596687/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 20:56:10.482588  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/test-preload-596687/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 20:56:10.516597  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/test-preload-596687/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 20:56:10.538934  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem --> /usr/share/ca-certificates/740962.pem (1338 bytes)
	I0729 20:56:10.560967  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /usr/share/ca-certificates/7409622.pem (1708 bytes)
	I0729 20:56:10.582788  778744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 20:56:10.604546  778744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 20:56:10.620952  778744 ssh_runner.go:195] Run: openssl version
	I0729 20:56:10.626824  778744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7409622.pem && ln -fs /usr/share/ca-certificates/7409622.pem /etc/ssl/certs/7409622.pem"
	I0729 20:56:10.636530  778744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7409622.pem
	I0729 20:56:10.640542  778744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 20:56:10.640592  778744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7409622.pem
	I0729 20:56:10.645865  778744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7409622.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 20:56:10.655363  778744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 20:56:10.665166  778744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:56:10.669129  778744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:56:10.669175  778744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:56:10.674478  778744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 20:56:10.684194  778744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/740962.pem && ln -fs /usr/share/ca-certificates/740962.pem /etc/ssl/certs/740962.pem"
	I0729 20:56:10.693593  778744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/740962.pem
	I0729 20:56:10.697499  778744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 20:56:10.697553  778744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/740962.pem
	I0729 20:56:10.702581  778744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/740962.pem /etc/ssl/certs/51391683.0"
	I0729 20:56:10.712168  778744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:56:10.716175  778744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 20:56:10.721665  778744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 20:56:10.726963  778744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 20:56:10.732285  778744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 20:56:10.737422  778744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 20:56:10.742849  778744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
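The openssl -checkend 86400 calls above ask whether each certificate expires within the next 24 hours. The same check can be expressed with crypto/x509; a minimal sketch (the certificate path is illustrative):

	// Sketch only: report whether the first certificate in a PEM file expires
	// within the given window, roughly `openssl x509 -checkend <seconds>`.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}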
	I0729 20:56:10.748160  778744 kubeadm.go:392] StartCluster: {Name:test-preload-596687 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-596687 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:56:10.748295  778744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 20:56:10.748349  778744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 20:56:10.783507  778744 cri.go:89] found id: ""
	I0729 20:56:10.783595  778744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 20:56:10.792930  778744 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 20:56:10.792955  778744 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 20:56:10.793015  778744 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 20:56:10.801867  778744 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:56:10.802324  778744 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-596687" does not appear in /home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:56:10.802461  778744 kubeconfig.go:62] /home/jenkins/minikube-integration/19344-733808/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-596687" cluster setting kubeconfig missing "test-preload-596687" context setting]
	I0729 20:56:10.802773  778744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/kubeconfig: {Name:mk9e65e9af9b71b889324d8c5e2a1adfebbca588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:56:10.803421  778744 kapi.go:59] client config for test-preload-596687: &rest.Config{Host:"https://192.168.39.110:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/test-preload-596687/client.crt", KeyFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/test-preload-596687/client.key", CAFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
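The rest.Config dump above shows the client being built from the profile's client cert/key and the cluster CA. A hedged sketch of constructing an equivalent client-go clientset from those fields (file names are placeholders):

	// Sketch only: build a clientset from host, client cert/key, and CA paths.
	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func newClient(host, certFile, keyFile, caFile string) (*kubernetes.Clientset, error) {
		cfg := &rest.Config{
			Host: host, // e.g. https://192.168.39.110:8443
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: certFile,
				KeyFile:  keyFile,
				CAFile:   caFile,
			},
		}
		return kubernetes.NewForConfig(cfg)
	}
	
	func main() {
		_, err := newClient("https://192.168.39.110:8443", "client.crt", "client.key", "ca.crt")
		fmt.Println(err)
	}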
	I0729 20:56:10.804089  778744 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 20:56:10.812617  778744 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.110
	I0729 20:56:10.812652  778744 kubeadm.go:1160] stopping kube-system containers ...
	I0729 20:56:10.812676  778744 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0729 20:56:10.812724  778744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 20:56:10.848716  778744 cri.go:89] found id: ""
	I0729 20:56:10.848790  778744 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 20:56:10.865070  778744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 20:56:10.873872  778744 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 20:56:10.873890  778744 kubeadm.go:157] found existing configuration files:
	
	I0729 20:56:10.873939  778744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 20:56:10.882356  778744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 20:56:10.882422  778744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 20:56:10.890945  778744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 20:56:10.899114  778744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 20:56:10.899168  778744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 20:56:10.907607  778744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 20:56:10.916289  778744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 20:56:10.916342  778744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 20:56:10.925060  778744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 20:56:10.933209  778744 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 20:56:10.933277  778744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
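Each grep/rm pair above keeps a config file only if it already references https://control-plane.minikube.internal:8443 and deletes it otherwise so kubeadm can regenerate it. A minimal Go sketch of that check:

	// Sketch only: remove a kubeconfig-style file that does not reference the
	// expected control-plane endpoint.
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func removeIfStale(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if os.IsNotExist(err) {
			return nil // nothing to clean up
		}
		if err != nil {
			return err
		}
		if strings.Contains(string(data), endpoint) {
			return nil // already points at the right endpoint
		}
		return os.Remove(path)
	}
	
	func main() {
		err := removeIfStale("/etc/kubernetes/admin.conf", "https://control-plane.minikube.internal:8443")
		fmt.Println(err)
	}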
	I0729 20:56:10.941948  778744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 20:56:10.950693  778744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 20:56:11.038726  778744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 20:56:11.751365  778744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 20:56:12.003025  778744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 20:56:12.068423  778744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0729 20:56:12.158110  778744 api_server.go:52] waiting for apiserver process to appear ...
	I0729 20:56:12.158201  778744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:56:12.659027  778744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:56:13.158326  778744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:56:13.172791  778744 api_server.go:72] duration metric: took 1.014677365s to wait for apiserver process to appear ...
	I0729 20:56:13.172822  778744 api_server.go:88] waiting for apiserver healthz status ...
	I0729 20:56:13.172874  778744 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I0729 20:56:13.173371  778744 api_server.go:269] stopped: https://192.168.39.110:8443/healthz: Get "https://192.168.39.110:8443/healthz": dial tcp 192.168.39.110:8443: connect: connection refused
	I0729 20:56:13.672952  778744 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I0729 20:56:16.484592  778744 api_server.go:279] https://192.168.39.110:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 20:56:16.484651  778744 api_server.go:103] status: https://192.168.39.110:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 20:56:16.484670  778744 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I0729 20:56:16.513145  778744 api_server.go:279] https://192.168.39.110:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 20:56:16.513183  778744 api_server.go:103] status: https://192.168.39.110:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 20:56:16.673439  778744 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I0729 20:56:16.679871  778744 api_server.go:279] https://192.168.39.110:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 20:56:16.679900  778744 api_server.go:103] status: https://192.168.39.110:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 20:56:17.173098  778744 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I0729 20:56:17.181472  778744 api_server.go:279] https://192.168.39.110:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 20:56:17.181510  778744 api_server.go:103] status: https://192.168.39.110:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 20:56:17.673647  778744 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I0729 20:56:17.680285  778744 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I0729 20:56:17.689195  778744 api_server.go:141] control plane version: v1.24.4
	I0729 20:56:17.689224  778744 api_server.go:131] duration metric: took 4.516368511s to wait for apiserver health ...
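The healthz wait above retries until the endpoint stops returning 403/500 and answers 200 ok. A small Go sketch of such a poll loop (URL and timeout are illustrative; TLS verification is skipped because the host does not trust the apiserver's serving cert):

	// Sketch only: poll an HTTPS healthz endpoint until it returns 200 or the
	// timeout expires.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}
	
	func main() {
		fmt.Println(waitForHealthz("https://192.168.39.110:8443/healthz", 2*time.Minute))
	}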
	I0729 20:56:17.689235  778744 cni.go:84] Creating CNI manager for ""
	I0729 20:56:17.689243  778744 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 20:56:17.690750  778744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 20:56:17.692079  778744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 20:56:17.702781  778744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0729 20:56:17.722467  778744 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 20:56:17.738802  778744 system_pods.go:59] 7 kube-system pods found
	I0729 20:56:17.738840  778744 system_pods.go:61] "coredns-6d4b75cb6d-x5zll" [3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e] Running
	I0729 20:56:17.738851  778744 system_pods.go:61] "etcd-test-preload-596687" [21949f97-efa6-4541-a241-396234cb9e36] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 20:56:17.738856  778744 system_pods.go:61] "kube-apiserver-test-preload-596687" [685d3205-8c3e-41d1-bf25-59fe0fa19bed] Running
	I0729 20:56:17.738866  778744 system_pods.go:61] "kube-controller-manager-test-preload-596687" [b3dc9a35-a90a-4c78-863e-bff60f650c74] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 20:56:17.738870  778744 system_pods.go:61] "kube-proxy-6p25c" [c0629865-b070-43ae-a812-e4d558fa1266] Running
	I0729 20:56:17.738873  778744 system_pods.go:61] "kube-scheduler-test-preload-596687" [d3bff6cc-b538-4ee2-9757-b0aa4523bd62] Running
	I0729 20:56:17.738880  778744 system_pods.go:61] "storage-provisioner" [26d5adb8-42e6-4841-a873-cee95e014e06] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0729 20:56:17.738887  778744 system_pods.go:74] duration metric: took 16.398252ms to wait for pod list to return data ...
	I0729 20:56:17.738894  778744 node_conditions.go:102] verifying NodePressure condition ...
	I0729 20:56:17.743305  778744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 20:56:17.743335  778744 node_conditions.go:123] node cpu capacity is 2
	I0729 20:56:17.743345  778744 node_conditions.go:105] duration metric: took 4.446732ms to run NodePressure ...
	I0729 20:56:17.743369  778744 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 20:56:17.908892  778744 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0729 20:56:17.915927  778744 kubeadm.go:739] kubelet initialised
	I0729 20:56:17.915953  778744 kubeadm.go:740] duration metric: took 7.036365ms waiting for restarted kubelet to initialise ...
	I0729 20:56:17.915962  778744 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 20:56:17.923430  778744 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-x5zll" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:17.930946  778744 pod_ready.go:97] node "test-preload-596687" hosting pod "coredns-6d4b75cb6d-x5zll" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:17.930973  778744 pod_ready.go:81] duration metric: took 7.514437ms for pod "coredns-6d4b75cb6d-x5zll" in "kube-system" namespace to be "Ready" ...
	E0729 20:56:17.930981  778744 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-596687" hosting pod "coredns-6d4b75cb6d-x5zll" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:17.930989  778744 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:17.936804  778744 pod_ready.go:97] node "test-preload-596687" hosting pod "etcd-test-preload-596687" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:17.936833  778744 pod_ready.go:81] duration metric: took 5.835409ms for pod "etcd-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	E0729 20:56:17.936843  778744 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-596687" hosting pod "etcd-test-preload-596687" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:17.936851  778744 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:17.942273  778744 pod_ready.go:97] node "test-preload-596687" hosting pod "kube-apiserver-test-preload-596687" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:17.942300  778744 pod_ready.go:81] duration metric: took 5.442892ms for pod "kube-apiserver-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	E0729 20:56:17.942310  778744 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-596687" hosting pod "kube-apiserver-test-preload-596687" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:17.942321  778744 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:18.130214  778744 pod_ready.go:97] node "test-preload-596687" hosting pod "kube-controller-manager-test-preload-596687" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:18.130248  778744 pod_ready.go:81] duration metric: took 187.917458ms for pod "kube-controller-manager-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	E0729 20:56:18.130259  778744 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-596687" hosting pod "kube-controller-manager-test-preload-596687" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:18.130266  778744 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6p25c" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:18.526606  778744 pod_ready.go:97] node "test-preload-596687" hosting pod "kube-proxy-6p25c" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:18.526638  778744 pod_ready.go:81] duration metric: took 396.363272ms for pod "kube-proxy-6p25c" in "kube-system" namespace to be "Ready" ...
	E0729 20:56:18.526647  778744 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-596687" hosting pod "kube-proxy-6p25c" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:18.526654  778744 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:18.926771  778744 pod_ready.go:97] node "test-preload-596687" hosting pod "kube-scheduler-test-preload-596687" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:18.926807  778744 pod_ready.go:81] duration metric: took 400.145523ms for pod "kube-scheduler-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	E0729 20:56:18.926818  778744 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-596687" hosting pod "kube-scheduler-test-preload-596687" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:18.926826  778744 pod_ready.go:38] duration metric: took 1.010855125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
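The pod_ready waits above key off each pod's Ready condition. A sketch of that check with client-go, assuming a kubeconfig path purely for illustration:

	// Sketch only: report whether a kube-system pod has Ready=True.
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func podReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	
	func main() {
		// Kubeconfig path is a placeholder for wherever the cluster's config lives.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		ready, err := podReady(context.Background(), client, "kube-proxy-6p25c")
		fmt.Println(ready, err)
	}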
	I0729 20:56:18.926847  778744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 20:56:18.939397  778744 ops.go:34] apiserver oom_adj: -16
	I0729 20:56:18.939421  778744 kubeadm.go:597] duration metric: took 8.146458925s to restartPrimaryControlPlane
	I0729 20:56:18.939430  778744 kubeadm.go:394] duration metric: took 8.19128491s to StartCluster
	I0729 20:56:18.939448  778744 settings.go:142] acquiring lock: {Name:mk9a2eb797f60b19768f4bfa250a8d2214a5ca12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:56:18.939545  778744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:56:18.940237  778744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/kubeconfig: {Name:mk9e65e9af9b71b889324d8c5e2a1adfebbca588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:56:18.940500  778744 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.110 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:56:18.940580  778744 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 20:56:18.940690  778744 addons.go:69] Setting storage-provisioner=true in profile "test-preload-596687"
	I0729 20:56:18.940695  778744 config.go:182] Loaded profile config "test-preload-596687": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0729 20:56:18.940721  778744 addons.go:234] Setting addon storage-provisioner=true in "test-preload-596687"
	I0729 20:56:18.940725  778744 addons.go:69] Setting default-storageclass=true in profile "test-preload-596687"
	W0729 20:56:18.940733  778744 addons.go:243] addon storage-provisioner should already be in state true
	I0729 20:56:18.940752  778744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-596687"
	I0729 20:56:18.940773  778744 host.go:66] Checking if "test-preload-596687" exists ...
	I0729 20:56:18.941078  778744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:56:18.941127  778744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:56:18.941169  778744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:56:18.941211  778744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:56:18.942576  778744 out.go:177] * Verifying Kubernetes components...
	I0729 20:56:18.944150  778744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:56:18.957540  778744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46581
	I0729 20:56:18.957617  778744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43677
	I0729 20:56:18.958130  778744 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:56:18.958145  778744 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:56:18.958810  778744 main.go:141] libmachine: Using API Version  1
	I0729 20:56:18.958856  778744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:56:18.958923  778744 main.go:141] libmachine: Using API Version  1
	I0729 20:56:18.958942  778744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:56:18.959249  778744 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:56:18.959287  778744 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:56:18.959524  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetState
	I0729 20:56:18.959891  778744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:56:18.959940  778744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:56:18.962551  778744 kapi.go:59] client config for test-preload-596687: &rest.Config{Host:"https://192.168.39.110:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/test-preload-596687/client.crt", KeyFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/test-preload-596687/client.key", CAFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uin
t8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 20:56:18.962868  778744 addons.go:234] Setting addon default-storageclass=true in "test-preload-596687"
	W0729 20:56:18.962885  778744 addons.go:243] addon default-storageclass should already be in state true
	I0729 20:56:18.962915  778744 host.go:66] Checking if "test-preload-596687" exists ...
	I0729 20:56:18.963236  778744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:56:18.963277  778744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:56:18.975173  778744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44583
	I0729 20:56:18.975691  778744 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:56:18.976277  778744 main.go:141] libmachine: Using API Version  1
	I0729 20:56:18.976301  778744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:56:18.976706  778744 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:56:18.976931  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetState
	I0729 20:56:18.978678  778744 main.go:141] libmachine: (test-preload-596687) Calling .DriverName
	I0729 20:56:18.980056  778744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I0729 20:56:18.980443  778744 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:56:18.980893  778744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 20:56:18.980934  778744 main.go:141] libmachine: Using API Version  1
	I0729 20:56:18.980954  778744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:56:18.981276  778744 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:56:18.981928  778744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:56:18.981975  778744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:56:18.982473  778744 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 20:56:18.982493  778744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 20:56:18.982511  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHHostname
	I0729 20:56:18.985604  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:56:18.986085  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:56:18.986113  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:56:18.986371  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHPort
	I0729 20:56:18.986589  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:56:18.986765  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHUsername
	I0729 20:56:18.986914  778744 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/test-preload-596687/id_rsa Username:docker}
	I0729 20:56:18.998327  778744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32865
	I0729 20:56:18.998784  778744 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:56:18.999301  778744 main.go:141] libmachine: Using API Version  1
	I0729 20:56:18.999333  778744 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:56:18.999685  778744 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:56:18.999868  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetState
	I0729 20:56:19.001593  778744 main.go:141] libmachine: (test-preload-596687) Calling .DriverName
	I0729 20:56:19.001830  778744 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 20:56:19.001850  778744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 20:56:19.001871  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHHostname
	I0729 20:56:19.004470  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:56:19.004901  778744 main.go:141] libmachine: (test-preload-596687) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:80:81:73", ip: ""} in network mk-test-preload-596687: {Iface:virbr1 ExpiryTime:2024-07-29 21:55:49 +0000 UTC Type:0 Mac:52:54:00:80:81:73 Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:test-preload-596687 Clientid:01:52:54:00:80:81:73}
	I0729 20:56:19.004934  778744 main.go:141] libmachine: (test-preload-596687) DBG | domain test-preload-596687 has defined IP address 192.168.39.110 and MAC address 52:54:00:80:81:73 in network mk-test-preload-596687
	I0729 20:56:19.005114  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHPort
	I0729 20:56:19.005304  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHKeyPath
	I0729 20:56:19.005496  778744 main.go:141] libmachine: (test-preload-596687) Calling .GetSSHUsername
	I0729 20:56:19.005666  778744 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/test-preload-596687/id_rsa Username:docker}
	I0729 20:56:19.137710  778744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 20:56:19.155303  778744 node_ready.go:35] waiting up to 6m0s for node "test-preload-596687" to be "Ready" ...
	I0729 20:56:19.221606  778744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 20:56:19.324020  778744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 20:56:20.233590  778744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.011936731s)
	I0729 20:56:20.233669  778744 main.go:141] libmachine: Making call to close driver server
	I0729 20:56:20.233691  778744 main.go:141] libmachine: (test-preload-596687) Calling .Close
	I0729 20:56:20.233982  778744 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:56:20.234008  778744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:56:20.234022  778744 main.go:141] libmachine: (test-preload-596687) DBG | Closing plugin on server side
	I0729 20:56:20.234030  778744 main.go:141] libmachine: Making call to close driver server
	I0729 20:56:20.234045  778744 main.go:141] libmachine: (test-preload-596687) Calling .Close
	I0729 20:56:20.234307  778744 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:56:20.234329  778744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:56:20.240587  778744 main.go:141] libmachine: Making call to close driver server
	I0729 20:56:20.240604  778744 main.go:141] libmachine: (test-preload-596687) Calling .Close
	I0729 20:56:20.240884  778744 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:56:20.240899  778744 main.go:141] libmachine: (test-preload-596687) DBG | Closing plugin on server side
	I0729 20:56:20.240906  778744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:56:20.273897  778744 main.go:141] libmachine: Making call to close driver server
	I0729 20:56:20.273930  778744 main.go:141] libmachine: (test-preload-596687) Calling .Close
	I0729 20:56:20.274260  778744 main.go:141] libmachine: (test-preload-596687) DBG | Closing plugin on server side
	I0729 20:56:20.274300  778744 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:56:20.274317  778744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:56:20.274343  778744 main.go:141] libmachine: Making call to close driver server
	I0729 20:56:20.274356  778744 main.go:141] libmachine: (test-preload-596687) Calling .Close
	I0729 20:56:20.274609  778744 main.go:141] libmachine: Successfully made call to close driver server
	I0729 20:56:20.274629  778744 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 20:56:20.274643  778744 main.go:141] libmachine: (test-preload-596687) DBG | Closing plugin on server side
	I0729 20:56:20.276796  778744 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0729 20:56:20.278357  778744 addons.go:510] duration metric: took 1.33778697s for enable addons: enabled=[default-storageclass storage-provisioner]
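
The two kubectl applies above install the storageclass and storage-provisioner manifests on the node with the bundled kubectl binary and an explicit KUBECONFIG. As a rough illustration only (not minikube's actual helper; binary and file paths are whatever the caller supplies), the same kind of invocation from Go could look like:

    package addons

    import (
    	"fmt"
    	"os/exec"
    )

    // applyManifest mirrors the `sudo KUBECONFIG=... kubectl apply -f ...` commands
    // seen in the log above. The env-assignment form is passed through sudo as an
    // argument; the concrete paths are supplied by the caller and are illustrative.
    func applyManifest(kubectlPath, kubeconfig, manifest string) error {
    	cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectlPath, "apply", "-f", manifest)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply %s failed: %v\n%s", manifest, err, out)
    	}
    	return nil
    }
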
	I0729 20:56:21.159777  778744 node_ready.go:53] node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:23.659556  778744 node_ready.go:53] node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:26.158768  778744 node_ready.go:53] node "test-preload-596687" has status "Ready":"False"
	I0729 20:56:27.159096  778744 node_ready.go:49] node "test-preload-596687" has status "Ready":"True"
	I0729 20:56:27.159125  778744 node_ready.go:38] duration metric: took 8.003778501s for node "test-preload-596687" to be "Ready" ...
	I0729 20:56:27.159137  778744 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 20:56:27.165051  778744 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-x5zll" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:27.170426  778744 pod_ready.go:92] pod "coredns-6d4b75cb6d-x5zll" in "kube-system" namespace has status "Ready":"True"
	I0729 20:56:27.170455  778744 pod_ready.go:81] duration metric: took 5.377058ms for pod "coredns-6d4b75cb6d-x5zll" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:27.170467  778744 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:29.176728  778744 pod_ready.go:102] pod "etcd-test-preload-596687" in "kube-system" namespace has status "Ready":"False"
	I0729 20:56:31.677235  778744 pod_ready.go:102] pod "etcd-test-preload-596687" in "kube-system" namespace has status "Ready":"False"
	I0729 20:56:32.678111  778744 pod_ready.go:92] pod "etcd-test-preload-596687" in "kube-system" namespace has status "Ready":"True"
	I0729 20:56:32.678144  778744 pod_ready.go:81] duration metric: took 5.507668102s for pod "etcd-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:32.678158  778744 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:32.682874  778744 pod_ready.go:92] pod "kube-apiserver-test-preload-596687" in "kube-system" namespace has status "Ready":"True"
	I0729 20:56:32.682896  778744 pod_ready.go:81] duration metric: took 4.730046ms for pod "kube-apiserver-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:32.682906  778744 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:32.686987  778744 pod_ready.go:92] pod "kube-controller-manager-test-preload-596687" in "kube-system" namespace has status "Ready":"True"
	I0729 20:56:32.687010  778744 pod_ready.go:81] duration metric: took 4.098327ms for pod "kube-controller-manager-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:32.687019  778744 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6p25c" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:32.690791  778744 pod_ready.go:92] pod "kube-proxy-6p25c" in "kube-system" namespace has status "Ready":"True"
	I0729 20:56:32.690808  778744 pod_ready.go:81] duration metric: took 3.783443ms for pod "kube-proxy-6p25c" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:32.690816  778744 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:32.694610  778744 pod_ready.go:92] pod "kube-scheduler-test-preload-596687" in "kube-system" namespace has status "Ready":"True"
	I0729 20:56:32.694626  778744 pod_ready.go:81] duration metric: took 3.805165ms for pod "kube-scheduler-test-preload-596687" in "kube-system" namespace to be "Ready" ...
	I0729 20:56:32.694634  778744 pod_ready.go:38] duration metric: took 5.535486376s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
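
The pod_ready lines above poll each system-critical pod until its Ready condition reports True, with a per-pod timeout. A minimal client-go sketch of that polling pattern (the interval and helper name are assumptions; this is not the pod_ready.go implementation):

    package podwait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the named pod reports the Ready condition as True,
    // roughly the check performed for each pod in the log above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 400*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep retrying on transient errors
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
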
	I0729 20:56:32.694648  778744 api_server.go:52] waiting for apiserver process to appear ...
	I0729 20:56:32.694695  778744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:56:32.709630  778744 api_server.go:72] duration metric: took 13.769090606s to wait for apiserver process to appear ...
	I0729 20:56:32.709657  778744 api_server.go:88] waiting for apiserver healthz status ...
	I0729 20:56:32.709676  778744 api_server.go:253] Checking apiserver healthz at https://192.168.39.110:8443/healthz ...
	I0729 20:56:32.714607  778744 api_server.go:279] https://192.168.39.110:8443/healthz returned 200:
	ok
	I0729 20:56:32.715496  778744 api_server.go:141] control plane version: v1.24.4
	I0729 20:56:32.715517  778744 api_server.go:131] duration metric: took 5.854539ms to wait for apiserver health ...
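
The healthz check above is a plain HTTPS GET against https://192.168.39.110:8443/healthz, trusted via the cluster CA and expected to return 200 with body "ok". A small sketch of such a probe (file paths and function name are examples only, not taken from minikube's code):

    package healthcheck

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    // checkHealthz GETs the apiserver /healthz endpoint using the given CA bundle,
    // mirroring the probe logged above.
    func checkHealthz(endpoint, caFile string) error {
    	caPEM, err := os.ReadFile(caFile)
    	if err != nil {
    		return err
    	}
    	pool := x509.NewCertPool()
    	if !pool.AppendCertsFromPEM(caPEM) {
    		return fmt.Errorf("invalid CA certificate in %s", caFile)
    	}
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}
    	resp, err := client.Get(endpoint + "/healthz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    	}
    	return nil // body is typically just "ok"
    }
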
	I0729 20:56:32.715525  778744 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 20:56:32.876844  778744 system_pods.go:59] 7 kube-system pods found
	I0729 20:56:32.876873  778744 system_pods.go:61] "coredns-6d4b75cb6d-x5zll" [3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e] Running
	I0729 20:56:32.876877  778744 system_pods.go:61] "etcd-test-preload-596687" [21949f97-efa6-4541-a241-396234cb9e36] Running
	I0729 20:56:32.876881  778744 system_pods.go:61] "kube-apiserver-test-preload-596687" [685d3205-8c3e-41d1-bf25-59fe0fa19bed] Running
	I0729 20:56:32.876884  778744 system_pods.go:61] "kube-controller-manager-test-preload-596687" [b3dc9a35-a90a-4c78-863e-bff60f650c74] Running
	I0729 20:56:32.876886  778744 system_pods.go:61] "kube-proxy-6p25c" [c0629865-b070-43ae-a812-e4d558fa1266] Running
	I0729 20:56:32.876889  778744 system_pods.go:61] "kube-scheduler-test-preload-596687" [d3bff6cc-b538-4ee2-9757-b0aa4523bd62] Running
	I0729 20:56:32.876892  778744 system_pods.go:61] "storage-provisioner" [26d5adb8-42e6-4841-a873-cee95e014e06] Running
	I0729 20:56:32.876900  778744 system_pods.go:74] duration metric: took 161.370253ms to wait for pod list to return data ...
	I0729 20:56:32.876907  778744 default_sa.go:34] waiting for default service account to be created ...
	I0729 20:56:33.075008  778744 default_sa.go:45] found service account: "default"
	I0729 20:56:33.075037  778744 default_sa.go:55] duration metric: took 198.124126ms for default service account to be created ...
	I0729 20:56:33.075045  778744 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 20:56:33.277072  778744 system_pods.go:86] 7 kube-system pods found
	I0729 20:56:33.277102  778744 system_pods.go:89] "coredns-6d4b75cb6d-x5zll" [3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e] Running
	I0729 20:56:33.277107  778744 system_pods.go:89] "etcd-test-preload-596687" [21949f97-efa6-4541-a241-396234cb9e36] Running
	I0729 20:56:33.277111  778744 system_pods.go:89] "kube-apiserver-test-preload-596687" [685d3205-8c3e-41d1-bf25-59fe0fa19bed] Running
	I0729 20:56:33.277116  778744 system_pods.go:89] "kube-controller-manager-test-preload-596687" [b3dc9a35-a90a-4c78-863e-bff60f650c74] Running
	I0729 20:56:33.277120  778744 system_pods.go:89] "kube-proxy-6p25c" [c0629865-b070-43ae-a812-e4d558fa1266] Running
	I0729 20:56:33.277124  778744 system_pods.go:89] "kube-scheduler-test-preload-596687" [d3bff6cc-b538-4ee2-9757-b0aa4523bd62] Running
	I0729 20:56:33.277127  778744 system_pods.go:89] "storage-provisioner" [26d5adb8-42e6-4841-a873-cee95e014e06] Running
	I0729 20:56:33.277133  778744 system_pods.go:126] duration metric: took 202.0832ms to wait for k8s-apps to be running ...
	I0729 20:56:33.277145  778744 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 20:56:33.277196  778744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:56:33.291444  778744 system_svc.go:56] duration metric: took 14.293212ms WaitForService to wait for kubelet
	I0729 20:56:33.291476  778744 kubeadm.go:582] duration metric: took 14.350941448s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 20:56:33.291495  778744 node_conditions.go:102] verifying NodePressure condition ...
	I0729 20:56:33.474953  778744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 20:56:33.474981  778744 node_conditions.go:123] node cpu capacity is 2
	I0729 20:56:33.474994  778744 node_conditions.go:105] duration metric: took 183.494495ms to run NodePressure ...
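
The NodePressure verification above reports the node's capacity figures (ephemeral storage and CPU). A short client-go sketch that fetches the same figures, for illustration only:

    package nodeinfo

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity reads the capacity values reported in the log above.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface, name string) error {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    	cpu := node.Status.Capacity[corev1.ResourceCPU]
    	fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", name, storage.String(), cpu.String())
    	return nil
    }
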
	I0729 20:56:33.475006  778744 start.go:241] waiting for startup goroutines ...
	I0729 20:56:33.475013  778744 start.go:246] waiting for cluster config update ...
	I0729 20:56:33.475022  778744 start.go:255] writing updated cluster config ...
	I0729 20:56:33.475326  778744 ssh_runner.go:195] Run: rm -f paused
	I0729 20:56:33.524701  778744 start.go:600] kubectl: 1.30.3, cluster: 1.24.4 (minor skew: 6)
	I0729 20:56:33.527165  778744 out.go:177] 
	W0729 20:56:33.528468  778744 out.go:239] ! /usr/local/bin/kubectl is version 1.30.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0729 20:56:33.529760  778744 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0729 20:56:33.531045  778744 out.go:177] * Done! kubectl is now configured to use "test-preload-596687" cluster and "default" namespace by default
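
The version-skew warning above comes from comparing kubectl's minor version (1.30) with the cluster's (1.24), a skew of 6. A trivial sketch of that comparison (helper name assumed; versions assumed to be plain "major.minor.patch" strings):

    package skew

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference in minor versions, the number
    // reported as "minor skew" in the output above.
    func minorSkew(kubectlVersion, clusterVersion string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("unexpected version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	km, err := minor(kubectlVersion)
    	if err != nil {
    		return 0, err
    	}
    	cm, err := minor(clusterVersion)
    	if err != nil {
    		return 0, err
    	}
    	if km > cm {
    		return km - cm, nil
    	}
    	return cm - km, nil
    }
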
	
	
	==> CRI-O <==
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.393989142Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722286594393962990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3446dca-dc46-4de7-beb1-6a54830131eb name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.394680588Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=58e25aaa-419f-42ac-a7a8-4331ee81fddc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.394748241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=58e25aaa-419f-42ac-a7a8-4331ee81fddc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.394981533Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b919ca06707c9ec592a76d4a8459e0a1784121e76b2dbb826c07b95a13c0500,PodSandboxId:501cbe93c568e4ce781c924a3567c2d5429a853f7bc6fadbe0b99923aa274d25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722286585221475756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-x5zll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e,},Annotations:map[string]string{io.kubernetes.container.hash: e0b72050,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42de784f2141f1214db68e68cd4f49c2e697150b7e3205202dba79ee949df3ac,PodSandboxId:d97d4778dc563073d2083836c0374cea6d3cad6a8cb229a839e7bcdcb6d6b3ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722286579280418933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 26d5adb8-42e6-4841-a873-cee95e014e06,},Annotations:map[string]string{io.kubernetes.container.hash: b7625691,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69062672399f7341babeb1605a97c1a25f92bdd6c46af57344a003429d057bb,PodSandboxId:d97d4778dc563073d2083836c0374cea6d3cad6a8cb229a839e7bcdcb6d6b3ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722286578155036110,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 26d5adb8-42e6-4841-a873-cee95e014e06,},Annotations:map[string]string{io.kubernetes.container.hash: b7625691,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e0d09cd1c66ea0c0055f38d965a805131d2b4b2be3796978e1d76f4548b7d4,PodSandboxId:d31b193f4d9ababf8280703885c68c096c16e368b634aaef4520a5100f15d27a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722286578147952399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p25c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0629865-b070-4
3ae-a812-e4d558fa1266,},Annotations:map[string]string{io.kubernetes.container.hash: 729d7d26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17e87af1c4ce02b464a1824da8e2cf6ad4a0945cf917f7b0975b6a74e6945104,PodSandboxId:12c2ba699fceee2357815828afd5944677eddc518ea3b016f257929f8f722c5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722286572825685742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c705f4f5062ce6b67ab5cba5a79566d9,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6306e6c03cc21b17147b866c5f3309233fcf4993aa6e33dbcde574d71234b7f,PodSandboxId:c7fa8b969d501cf930b31ae97c1494fa6ac70e9ea33042d35fbbcfe9e80979ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722286572850481884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c4797d57c43251a34add3abfa560b,},A
nnotations:map[string]string{io.kubernetes.container.hash: d544ada8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55d73f03f339a1c1e829b632ddd10b021e5751bb689f29e68519021a08bcab5,PodSandboxId:a25e5bf092d1d3bec1b224d463607ad408bb47c2cc3e76bbc84f42f0167c6fbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722286572819355410,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acbff5f33fae664769315203b6833a52,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:324ad32ef32a42d3e31fcf7a94561fcabb3ba469e31db67943697306493b788b,PodSandboxId:4aa0ab37e99c59fcb8f8aa9e8b8663e8199f7ba2344816938ef9a76ef58d632a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722286572800046899,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ba9769d502d816fdcffc1b86019e4b,},Annotations:map[string]
string{io.kubernetes.container.hash: 631d35b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=58e25aaa-419f-42ac-a7a8-4331ee81fddc name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.429652374Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e034e6af-39e6-4c7e-bd98-84421356eb98 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.429743894Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e034e6af-39e6-4c7e-bd98-84421356eb98 name=/runtime.v1.RuntimeService/Version
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.430853966Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af4213e8-565a-49c0-bb5e-7e7515f9d009 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.431270373Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722286594431247318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af4213e8-565a-49c0-bb5e-7e7515f9d009 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.431810312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9257474a-4003-44ba-8a1d-7cfaaf5abc76 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.431874532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9257474a-4003-44ba-8a1d-7cfaaf5abc76 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.432208217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b919ca06707c9ec592a76d4a8459e0a1784121e76b2dbb826c07b95a13c0500,PodSandboxId:501cbe93c568e4ce781c924a3567c2d5429a853f7bc6fadbe0b99923aa274d25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722286585221475756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-x5zll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e,},Annotations:map[string]string{io.kubernetes.container.hash: e0b72050,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42de784f2141f1214db68e68cd4f49c2e697150b7e3205202dba79ee949df3ac,PodSandboxId:d97d4778dc563073d2083836c0374cea6d3cad6a8cb229a839e7bcdcb6d6b3ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722286579280418933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 26d5adb8-42e6-4841-a873-cee95e014e06,},Annotations:map[string]string{io.kubernetes.container.hash: b7625691,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69062672399f7341babeb1605a97c1a25f92bdd6c46af57344a003429d057bb,PodSandboxId:d97d4778dc563073d2083836c0374cea6d3cad6a8cb229a839e7bcdcb6d6b3ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722286578155036110,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 26d5adb8-42e6-4841-a873-cee95e014e06,},Annotations:map[string]string{io.kubernetes.container.hash: b7625691,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e0d09cd1c66ea0c0055f38d965a805131d2b4b2be3796978e1d76f4548b7d4,PodSandboxId:d31b193f4d9ababf8280703885c68c096c16e368b634aaef4520a5100f15d27a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722286578147952399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p25c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0629865-b070-4
3ae-a812-e4d558fa1266,},Annotations:map[string]string{io.kubernetes.container.hash: 729d7d26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17e87af1c4ce02b464a1824da8e2cf6ad4a0945cf917f7b0975b6a74e6945104,PodSandboxId:12c2ba699fceee2357815828afd5944677eddc518ea3b016f257929f8f722c5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722286572825685742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c705f4f5062ce6b67ab5cba5a79566d9,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6306e6c03cc21b17147b866c5f3309233fcf4993aa6e33dbcde574d71234b7f,PodSandboxId:c7fa8b969d501cf930b31ae97c1494fa6ac70e9ea33042d35fbbcfe9e80979ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722286572850481884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c4797d57c43251a34add3abfa560b,},A
nnotations:map[string]string{io.kubernetes.container.hash: d544ada8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55d73f03f339a1c1e829b632ddd10b021e5751bb689f29e68519021a08bcab5,PodSandboxId:a25e5bf092d1d3bec1b224d463607ad408bb47c2cc3e76bbc84f42f0167c6fbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722286572819355410,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acbff5f33fae664769315203b6833a52,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:324ad32ef32a42d3e31fcf7a94561fcabb3ba469e31db67943697306493b788b,PodSandboxId:4aa0ab37e99c59fcb8f8aa9e8b8663e8199f7ba2344816938ef9a76ef58d632a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722286572800046899,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ba9769d502d816fdcffc1b86019e4b,},Annotations:map[string]
string{io.kubernetes.container.hash: 631d35b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9257474a-4003-44ba-8a1d-7cfaaf5abc76 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.466154306Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=604c931c-d808-4201-bcf1-b62f0b5b208f name=/runtime.v1.RuntimeService/Version
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.466235754Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=604c931c-d808-4201-bcf1-b62f0b5b208f name=/runtime.v1.RuntimeService/Version
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.467134738Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0cef01e2-0f5e-4ce1-88d5-12dc0a4c011b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.467614903Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722286594467592092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0cef01e2-0f5e-4ce1-88d5-12dc0a4c011b name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.468066901Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7652a975-c219-4bb7-a385-c694d303ecdd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.468133301Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7652a975-c219-4bb7-a385-c694d303ecdd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.468315512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b919ca06707c9ec592a76d4a8459e0a1784121e76b2dbb826c07b95a13c0500,PodSandboxId:501cbe93c568e4ce781c924a3567c2d5429a853f7bc6fadbe0b99923aa274d25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722286585221475756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-x5zll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e,},Annotations:map[string]string{io.kubernetes.container.hash: e0b72050,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42de784f2141f1214db68e68cd4f49c2e697150b7e3205202dba79ee949df3ac,PodSandboxId:d97d4778dc563073d2083836c0374cea6d3cad6a8cb229a839e7bcdcb6d6b3ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722286579280418933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 26d5adb8-42e6-4841-a873-cee95e014e06,},Annotations:map[string]string{io.kubernetes.container.hash: b7625691,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69062672399f7341babeb1605a97c1a25f92bdd6c46af57344a003429d057bb,PodSandboxId:d97d4778dc563073d2083836c0374cea6d3cad6a8cb229a839e7bcdcb6d6b3ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722286578155036110,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 26d5adb8-42e6-4841-a873-cee95e014e06,},Annotations:map[string]string{io.kubernetes.container.hash: b7625691,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e0d09cd1c66ea0c0055f38d965a805131d2b4b2be3796978e1d76f4548b7d4,PodSandboxId:d31b193f4d9ababf8280703885c68c096c16e368b634aaef4520a5100f15d27a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722286578147952399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p25c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0629865-b070-4
3ae-a812-e4d558fa1266,},Annotations:map[string]string{io.kubernetes.container.hash: 729d7d26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17e87af1c4ce02b464a1824da8e2cf6ad4a0945cf917f7b0975b6a74e6945104,PodSandboxId:12c2ba699fceee2357815828afd5944677eddc518ea3b016f257929f8f722c5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722286572825685742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c705f4f5062ce6b67ab5cba5a79566d9,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6306e6c03cc21b17147b866c5f3309233fcf4993aa6e33dbcde574d71234b7f,PodSandboxId:c7fa8b969d501cf930b31ae97c1494fa6ac70e9ea33042d35fbbcfe9e80979ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722286572850481884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c4797d57c43251a34add3abfa560b,},A
nnotations:map[string]string{io.kubernetes.container.hash: d544ada8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55d73f03f339a1c1e829b632ddd10b021e5751bb689f29e68519021a08bcab5,PodSandboxId:a25e5bf092d1d3bec1b224d463607ad408bb47c2cc3e76bbc84f42f0167c6fbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722286572819355410,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acbff5f33fae664769315203b6833a52,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:324ad32ef32a42d3e31fcf7a94561fcabb3ba469e31db67943697306493b788b,PodSandboxId:4aa0ab37e99c59fcb8f8aa9e8b8663e8199f7ba2344816938ef9a76ef58d632a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722286572800046899,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ba9769d502d816fdcffc1b86019e4b,},Annotations:map[string]
string{io.kubernetes.container.hash: 631d35b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7652a975-c219-4bb7-a385-c694d303ecdd name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.498609654Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b63f33d1-5a20-4f5b-8333-8e92fce4269f name=/runtime.v1.RuntimeService/Version
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.498697482Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b63f33d1-5a20-4f5b-8333-8e92fce4269f name=/runtime.v1.RuntimeService/Version
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.499710592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8f8b44d5-d061-473a-8cf9-fea31f07b3e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.500178593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722286594500157267,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8f8b44d5-d061-473a-8cf9-fea31f07b3e9 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.500647563Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7b100502-efed-4a9a-b8e7-33d393d73cec name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.500709671Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7b100502-efed-4a9a-b8e7-33d393d73cec name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 20:56:34 test-preload-596687 crio[675]: time="2024-07-29 20:56:34.500948963Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3b919ca06707c9ec592a76d4a8459e0a1784121e76b2dbb826c07b95a13c0500,PodSandboxId:501cbe93c568e4ce781c924a3567c2d5429a853f7bc6fadbe0b99923aa274d25,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1722286585221475756,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-x5zll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e,},Annotations:map[string]string{io.kubernetes.container.hash: e0b72050,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42de784f2141f1214db68e68cd4f49c2e697150b7e3205202dba79ee949df3ac,PodSandboxId:d97d4778dc563073d2083836c0374cea6d3cad6a8cb229a839e7bcdcb6d6b3ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722286579280418933,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 26d5adb8-42e6-4841-a873-cee95e014e06,},Annotations:map[string]string{io.kubernetes.container.hash: b7625691,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c69062672399f7341babeb1605a97c1a25f92bdd6c46af57344a003429d057bb,PodSandboxId:d97d4778dc563073d2083836c0374cea6d3cad6a8cb229a839e7bcdcb6d6b3ee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722286578155036110,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 26d5adb8-42e6-4841-a873-cee95e014e06,},Annotations:map[string]string{io.kubernetes.container.hash: b7625691,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5e0d09cd1c66ea0c0055f38d965a805131d2b4b2be3796978e1d76f4548b7d4,PodSandboxId:d31b193f4d9ababf8280703885c68c096c16e368b634aaef4520a5100f15d27a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1722286578147952399,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6p25c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c0629865-b070-4
3ae-a812-e4d558fa1266,},Annotations:map[string]string{io.kubernetes.container.hash: 729d7d26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17e87af1c4ce02b464a1824da8e2cf6ad4a0945cf917f7b0975b6a74e6945104,PodSandboxId:12c2ba699fceee2357815828afd5944677eddc518ea3b016f257929f8f722c5f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1722286572825685742,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c705f4f5062ce6b67ab5cba5a79566d9,},Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f6306e6c03cc21b17147b866c5f3309233fcf4993aa6e33dbcde574d71234b7f,PodSandboxId:c7fa8b969d501cf930b31ae97c1494fa6ac70e9ea33042d35fbbcfe9e80979ef,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1722286572850481884,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a39c4797d57c43251a34add3abfa560b,},A
nnotations:map[string]string{io.kubernetes.container.hash: d544ada8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c55d73f03f339a1c1e829b632ddd10b021e5751bb689f29e68519021a08bcab5,PodSandboxId:a25e5bf092d1d3bec1b224d463607ad408bb47c2cc3e76bbc84f42f0167c6fbd,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1722286572819355410,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acbff5f33fae664769315203b6833a52,},Annotations:
map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:324ad32ef32a42d3e31fcf7a94561fcabb3ba469e31db67943697306493b788b,PodSandboxId:4aa0ab37e99c59fcb8f8aa9e8b8663e8199f7ba2344816938ef9a76ef58d632a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1722286572800046899,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-596687,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 18ba9769d502d816fdcffc1b86019e4b,},Annotations:map[string]
string{io.kubernetes.container.hash: 631d35b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7b100502-efed-4a9a-b8e7-33d393d73cec name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3b919ca06707c       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   9 seconds ago       Running             coredns                   1                   501cbe93c568e       coredns-6d4b75cb6d-x5zll
	42de784f2141f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 seconds ago      Running             storage-provisioner       2                   d97d4778dc563       storage-provisioner
	c69062672399f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 seconds ago      Exited              storage-provisioner       1                   d97d4778dc563       storage-provisioner
	a5e0d09cd1c66       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   16 seconds ago      Running             kube-proxy                1                   d31b193f4d9ab       kube-proxy-6p25c
	f6306e6c03cc2       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   c7fa8b969d501       etcd-test-preload-596687
	17e87af1c4ce0       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   12c2ba699fcee       kube-controller-manager-test-preload-596687
	c55d73f03f339       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   a25e5bf092d1d       kube-scheduler-test-preload-596687
	324ad32ef32a4       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   4aa0ab37e99c5       kube-apiserver-test-preload-596687
	
	
	==> coredns [3b919ca06707c9ec592a76d4a8459e0a1784121e76b2dbb826c07b95a13c0500] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:57813 - 23795 "HINFO IN 2461293477210280827.6487772968481563170. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019278956s
	
	
	==> describe nodes <==
	Name:               test-preload-596687
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-596687
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=test-preload-596687
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T20_54_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 20:54:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-596687
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 20:56:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 20:56:26 +0000   Mon, 29 Jul 2024 20:54:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 20:56:26 +0000   Mon, 29 Jul 2024 20:54:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 20:56:26 +0000   Mon, 29 Jul 2024 20:54:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 20:56:26 +0000   Mon, 29 Jul 2024 20:56:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.110
	  Hostname:    test-preload-596687
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 5912b5a1a12c4ba1b1951c101dfce6cc
	  System UUID:                5912b5a1-a12c-4ba1-b195-1c101dfce6cc
	  Boot ID:                    2538f352-6b41-484b-821d-08376f607014
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-x5zll                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     90s
	  kube-system                 etcd-test-preload-596687                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         104s
	  kube-system                 kube-apiserver-test-preload-596687             250m (12%)    0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-test-preload-596687    200m (10%)    0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-6p25c                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-test-preload-596687             100m (5%)     0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16s                kube-proxy       
	  Normal  Starting                 88s                kube-proxy       
	  Normal  Starting                 104s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  104s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  104s               kubelet          Node test-preload-596687 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    104s               kubelet          Node test-preload-596687 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     104s               kubelet          Node test-preload-596687 status is now: NodeHasSufficientPID
	  Normal  NodeReady                94s                kubelet          Node test-preload-596687 status is now: NodeReady
	  Normal  RegisteredNode           91s                node-controller  Node test-preload-596687 event: Registered Node test-preload-596687 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node test-preload-596687 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node test-preload-596687 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node test-preload-596687 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5s                 node-controller  Node test-preload-596687 event: Registered Node test-preload-596687 in Controller
	
	
	==> dmesg <==
	[Jul29 20:55] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050498] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.036184] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.667459] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.729411] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.506208] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.947174] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.057060] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053671] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.156056] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.135731] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.278052] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[Jul29 20:56] systemd-fstab-generator[929]: Ignoring "noauto" option for root device
	[  +0.059892] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.641002] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +6.164911] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.946748] systemd-fstab-generator[1701]: Ignoring "noauto" option for root device
	[  +6.011928] kauditd_printk_skb: 64 callbacks suppressed
	
	
	==> etcd [f6306e6c03cc21b17147b866c5f3309233fcf4993aa6e33dbcde574d71234b7f] <==
	{"level":"info","ts":"2024-07-29T20:56:13.165Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"fbb007bab925a598","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-07-29T20:56:13.167Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-07-29T20:56:13.168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 switched to configuration voters=(18136004197972551064)"}
	{"level":"info","ts":"2024-07-29T20:56:13.168Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","added-peer-id":"fbb007bab925a598","added-peer-peer-urls":["https://192.168.39.110:2380"]}
	{"level":"info","ts":"2024-07-29T20:56:13.171Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a3dbfa6decfc8853","local-member-id":"fbb007bab925a598","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T20:56:13.172Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T20:56:13.173Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T20:56:13.173Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"fbb007bab925a598","initial-advertise-peer-urls":["https://192.168.39.110:2380"],"listen-peer-urls":["https://192.168.39.110:2380"],"advertise-client-urls":["https://192.168.39.110:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.110:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T20:56:13.177Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T20:56:13.177Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2024-07-29T20:56:13.177Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.110:2380"}
	{"level":"info","ts":"2024-07-29T20:56:14.144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T20:56:14.144Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T20:56:14.145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgPreVoteResp from fbb007bab925a598 at term 2"}
	{"level":"info","ts":"2024-07-29T20:56:14.145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T20:56:14.145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 received MsgVoteResp from fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2024-07-29T20:56:14.145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fbb007bab925a598 became leader at term 3"}
	{"level":"info","ts":"2024-07-29T20:56:14.145Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fbb007bab925a598 elected leader fbb007bab925a598 at term 3"}
	{"level":"info","ts":"2024-07-29T20:56:14.149Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"fbb007bab925a598","local-member-attributes":"{Name:test-preload-596687 ClientURLs:[https://192.168.39.110:2379]}","request-path":"/0/members/fbb007bab925a598/attributes","cluster-id":"a3dbfa6decfc8853","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T20:56:14.149Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T20:56:14.150Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T20:56:14.151Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T20:56:14.162Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.110:2379"}
	{"level":"info","ts":"2024-07-29T20:56:14.165Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T20:56:14.165Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 20:56:34 up 0 min,  0 users,  load average: 0.82, 0.24, 0.08
	Linux test-preload-596687 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [324ad32ef32a42d3e31fcf7a94561fcabb3ba469e31db67943697306493b788b] <==
	I0729 20:56:16.464052       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0729 20:56:16.467597       1 apf_controller.go:317] Starting API Priority and Fairness config controller
	I0729 20:56:16.437725       1 autoregister_controller.go:141] Starting autoregister controller
	I0729 20:56:16.470968       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0729 20:56:16.444303       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 20:56:16.444315       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 20:56:16.558497       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 20:56:16.567795       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0729 20:56:16.571632       1 cache.go:39] Caches are synced for autoregister controller
	I0729 20:56:16.578997       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0729 20:56:16.587609       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0729 20:56:16.600667       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0729 20:56:16.602403       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 20:56:16.607500       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0729 20:56:16.626593       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 20:56:17.126077       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 20:56:17.438215       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 20:56:17.822247       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0729 20:56:17.834057       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0729 20:56:17.871579       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0729 20:56:17.887513       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 20:56:17.893317       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0729 20:56:18.445057       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0729 20:56:29.450387       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 20:56:29.498478       1 controller.go:611] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [17e87af1c4ce02b464a1824da8e2cf6ad4a0945cf917f7b0975b6a74e6945104] <==
	I0729 20:56:29.269833       1 shared_informer.go:262] Caches are synced for crt configmap
	I0729 20:56:29.271052       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0729 20:56:29.273422       1 shared_informer.go:262] Caches are synced for GC
	I0729 20:56:29.274585       1 shared_informer.go:262] Caches are synced for taint
	I0729 20:56:29.274678       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0729 20:56:29.274703       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0729 20:56:29.274931       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-596687. Assuming now as a timestamp.
	I0729 20:56:29.274978       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0729 20:56:29.275307       1 event.go:294] "Event occurred" object="test-preload-596687" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-596687 event: Registered Node test-preload-596687 in Controller"
	I0729 20:56:29.277885       1 shared_informer.go:262] Caches are synced for attach detach
	I0729 20:56:29.279973       1 shared_informer.go:262] Caches are synced for deployment
	I0729 20:56:29.280814       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0729 20:56:29.286575       1 shared_informer.go:262] Caches are synced for daemon sets
	I0729 20:56:29.287253       1 shared_informer.go:262] Caches are synced for persistent volume
	I0729 20:56:29.289006       1 shared_informer.go:262] Caches are synced for TTL
	I0729 20:56:29.353840       1 shared_informer.go:262] Caches are synced for disruption
	I0729 20:56:29.353971       1 disruption.go:371] Sending events to api server.
	I0729 20:56:29.437223       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0729 20:56:29.471502       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0729 20:56:29.482869       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 20:56:29.487735       1 shared_informer.go:262] Caches are synced for endpoint
	I0729 20:56:29.520096       1 shared_informer.go:262] Caches are synced for resource quota
	I0729 20:56:29.936300       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 20:56:29.978953       1 shared_informer.go:262] Caches are synced for garbage collector
	I0729 20:56:29.979055       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	
	==> kube-proxy [a5e0d09cd1c66ea0c0055f38d965a805131d2b4b2be3796978e1d76f4548b7d4] <==
	I0729 20:56:18.403680       1 node.go:163] Successfully retrieved node IP: 192.168.39.110
	I0729 20:56:18.403753       1 server_others.go:138] "Detected node IP" address="192.168.39.110"
	I0729 20:56:18.403819       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0729 20:56:18.433717       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0729 20:56:18.433749       1 server_others.go:206] "Using iptables Proxier"
	I0729 20:56:18.434258       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0729 20:56:18.434921       1 server.go:661] "Version info" version="v1.24.4"
	I0729 20:56:18.434946       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:56:18.437287       1 config.go:317] "Starting service config controller"
	I0729 20:56:18.437486       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0729 20:56:18.437518       1 config.go:226] "Starting endpoint slice config controller"
	I0729 20:56:18.437573       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0729 20:56:18.438875       1 config.go:444] "Starting node config controller"
	I0729 20:56:18.438896       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0729 20:56:18.538535       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0729 20:56:18.538661       1 shared_informer.go:262] Caches are synced for service config
	I0729 20:56:18.538928       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [c55d73f03f339a1c1e829b632ddd10b021e5751bb689f29e68519021a08bcab5] <==
	I0729 20:56:13.473959       1 serving.go:348] Generated self-signed cert in-memory
	W0729 20:56:16.487633       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 20:56:16.487705       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 20:56:16.487718       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 20:56:16.487725       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 20:56:16.609220       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0729 20:56:16.609251       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 20:56:16.618378       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0729 20:56:16.618612       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 20:56:16.618661       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 20:56:16.618696       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 20:56:16.719073       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: I0729 20:56:17.112934    1064 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: I0729 20:56:17.113040    1064 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: I0729 20:56:17.113092    1064 topology_manager.go:200] "Topology Admit Handler"
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: E0729 20:56:17.114759    1064 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-x5zll" podUID=3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: I0729 20:56:17.178929    1064 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c0629865-b070-43ae-a812-e4d558fa1266-kube-proxy\") pod \"kube-proxy-6p25c\" (UID: \"c0629865-b070-43ae-a812-e4d558fa1266\") " pod="kube-system/kube-proxy-6p25c"
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: I0729 20:56:17.179028    1064 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c0629865-b070-43ae-a812-e4d558fa1266-xtables-lock\") pod \"kube-proxy-6p25c\" (UID: \"c0629865-b070-43ae-a812-e4d558fa1266\") " pod="kube-system/kube-proxy-6p25c"
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: I0729 20:56:17.179058    1064 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/26d5adb8-42e6-4841-a873-cee95e014e06-tmp\") pod \"storage-provisioner\" (UID: \"26d5adb8-42e6-4841-a873-cee95e014e06\") " pod="kube-system/storage-provisioner"
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: I0729 20:56:17.179080    1064 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8975r\" (UniqueName: \"kubernetes.io/projected/c0629865-b070-43ae-a812-e4d558fa1266-kube-api-access-8975r\") pod \"kube-proxy-6p25c\" (UID: \"c0629865-b070-43ae-a812-e4d558fa1266\") " pod="kube-system/kube-proxy-6p25c"
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: I0729 20:56:17.179123    1064 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e-config-volume\") pod \"coredns-6d4b75cb6d-x5zll\" (UID: \"3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e\") " pod="kube-system/coredns-6d4b75cb6d-x5zll"
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: I0729 20:56:17.179149    1064 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6xwz\" (UniqueName: \"kubernetes.io/projected/26d5adb8-42e6-4841-a873-cee95e014e06-kube-api-access-s6xwz\") pod \"storage-provisioner\" (UID: \"26d5adb8-42e6-4841-a873-cee95e014e06\") " pod="kube-system/storage-provisioner"
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: I0729 20:56:17.179170    1064 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c0629865-b070-43ae-a812-e4d558fa1266-lib-modules\") pod \"kube-proxy-6p25c\" (UID: \"c0629865-b070-43ae-a812-e4d558fa1266\") " pod="kube-system/kube-proxy-6p25c"
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: I0729 20:56:17.179201    1064 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rvlt\" (UniqueName: \"kubernetes.io/projected/3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e-kube-api-access-5rvlt\") pod \"coredns-6d4b75cb6d-x5zll\" (UID: \"3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e\") " pod="kube-system/coredns-6d4b75cb6d-x5zll"
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: I0729 20:56:17.179214    1064 reconciler.go:159] "Reconciler: start to sync state"
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: E0729 20:56:17.183568    1064 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?"
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: E0729 20:56:17.283049    1064 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: E0729 20:56:17.283194    1064 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e-config-volume podName:3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e nodeName:}" failed. No retries permitted until 2024-07-29 20:56:17.783141005 +0000 UTC m=+5.789055070 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e-config-volume") pod "coredns-6d4b75cb6d-x5zll" (UID: "3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e") : object "kube-system"/"coredns" not registered
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: E0729 20:56:17.787930    1064 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 20:56:17 test-preload-596687 kubelet[1064]: E0729 20:56:17.788013    1064 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e-config-volume podName:3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e nodeName:}" failed. No retries permitted until 2024-07-29 20:56:18.787995324 +0000 UTC m=+6.793909388 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e-config-volume") pod "coredns-6d4b75cb6d-x5zll" (UID: "3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e") : object "kube-system"/"coredns" not registered
	Jul 29 20:56:18 test-preload-596687 kubelet[1064]: E0729 20:56:18.796398    1064 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 20:56:18 test-preload-596687 kubelet[1064]: E0729 20:56:18.796520    1064 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e-config-volume podName:3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e nodeName:}" failed. No retries permitted until 2024-07-29 20:56:20.796502477 +0000 UTC m=+8.802416533 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e-config-volume") pod "coredns-6d4b75cb6d-x5zll" (UID: "3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e") : object "kube-system"/"coredns" not registered
	Jul 29 20:56:19 test-preload-596687 kubelet[1064]: E0729 20:56:19.213378    1064 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-x5zll" podUID=3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e
	Jul 29 20:56:19 test-preload-596687 kubelet[1064]: I0729 20:56:19.264699    1064 scope.go:110] "RemoveContainer" containerID="c69062672399f7341babeb1605a97c1a25f92bdd6c46af57344a003429d057bb"
	Jul 29 20:56:20 test-preload-596687 kubelet[1064]: E0729 20:56:20.815703    1064 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 29 20:56:20 test-preload-596687 kubelet[1064]: E0729 20:56:20.815871    1064 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e-config-volume podName:3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e nodeName:}" failed. No retries permitted until 2024-07-29 20:56:24.815849114 +0000 UTC m=+12.821763178 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e-config-volume") pod "coredns-6d4b75cb6d-x5zll" (UID: "3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e") : object "kube-system"/"coredns" not registered
	Jul 29 20:56:21 test-preload-596687 kubelet[1064]: E0729 20:56:21.213659    1064 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-x5zll" podUID=3cb1a81f-2978-42f8-9d7f-40eaf0a9c32e
	
	
	==> storage-provisioner [42de784f2141f1214db68e68cd4f49c2e697150b7e3205202dba79ee949df3ac] <==
	I0729 20:56:19.463391       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 20:56:19.478274       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 20:56:19.479114       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [c69062672399f7341babeb1605a97c1a25f92bdd6c46af57344a003429d057bb] <==
	I0729 20:56:18.263882       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0729 20:56:18.266463       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-596687 -n test-preload-596687
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-596687 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-596687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-596687
--- FAIL: TestPreload (242.25s)

                                                
                                    
TestKubernetesUpgrade (484.31s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171355 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-171355 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m57.075168723s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-171355] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-171355" primary control-plane node in "kubernetes-upgrade-171355" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:58:27.029414  780237 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:58:27.029584  780237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:58:27.029596  780237 out.go:304] Setting ErrFile to fd 2...
	I0729 20:58:27.029602  780237 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:58:27.029790  780237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:58:27.030371  780237 out.go:298] Setting JSON to false
	I0729 20:58:27.031351  780237 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":16854,"bootTime":1722269853,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 20:58:27.031428  780237 start.go:139] virtualization: kvm guest
	I0729 20:58:27.033966  780237 out.go:177] * [kubernetes-upgrade-171355] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 20:58:27.035735  780237 notify.go:220] Checking for updates...
	I0729 20:58:27.036328  780237 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 20:58:27.038642  780237 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 20:58:27.041308  780237 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:58:27.043629  780237 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:58:27.045851  780237 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 20:58:27.048077  780237 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 20:58:27.049356  780237 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 20:58:27.086193  780237 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 20:58:27.087286  780237 start.go:297] selected driver: kvm2
	I0729 20:58:27.087300  780237 start.go:901] validating driver "kvm2" against <nil>
	I0729 20:58:27.087314  780237 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 20:58:27.088186  780237 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:58:27.104665  780237 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 20:58:27.122150  780237 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 20:58:27.122226  780237 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 20:58:27.122552  780237 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 20:58:27.122631  780237 cni.go:84] Creating CNI manager for ""
	I0729 20:58:27.122650  780237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 20:58:27.122660  780237 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 20:58:27.122739  780237 start.go:340] cluster config:
	{Name:kubernetes-upgrade-171355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-171355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:58:27.122868  780237 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 20:58:27.124416  780237 out.go:177] * Starting "kubernetes-upgrade-171355" primary control-plane node in "kubernetes-upgrade-171355" cluster
	I0729 20:58:27.125649  780237 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 20:58:27.125699  780237 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 20:58:27.125712  780237 cache.go:56] Caching tarball of preloaded images
	I0729 20:58:27.125807  780237 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 20:58:27.125825  780237 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 20:58:27.126283  780237 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/config.json ...
	I0729 20:58:27.126315  780237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/config.json: {Name:mk056f8efdffaee3bb829979dea183a24f205b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:58:27.126504  780237 start.go:360] acquireMachinesLock for kubernetes-upgrade-171355: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 20:58:53.168634  780237 start.go:364] duration metric: took 26.042024367s to acquireMachinesLock for "kubernetes-upgrade-171355"
	I0729 20:58:53.168764  780237 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-171355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-171355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 20:58:53.168906  780237 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 20:58:53.172153  780237 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0729 20:58:53.172454  780237 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:58:53.172543  780237 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:58:53.188904  780237 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42815
	I0729 20:58:53.189374  780237 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:58:53.189978  780237 main.go:141] libmachine: Using API Version  1
	I0729 20:58:53.190000  780237 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:58:53.190285  780237 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:58:53.190462  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetMachineName
	I0729 20:58:53.190617  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .DriverName
	I0729 20:58:53.190746  780237 start.go:159] libmachine.API.Create for "kubernetes-upgrade-171355" (driver="kvm2")
	I0729 20:58:53.190775  780237 client.go:168] LocalClient.Create starting
	I0729 20:58:53.190799  780237 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem
	I0729 20:58:53.190829  780237 main.go:141] libmachine: Decoding PEM data...
	I0729 20:58:53.190855  780237 main.go:141] libmachine: Parsing certificate...
	I0729 20:58:53.190912  780237 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem
	I0729 20:58:53.190931  780237 main.go:141] libmachine: Decoding PEM data...
	I0729 20:58:53.190939  780237 main.go:141] libmachine: Parsing certificate...
	I0729 20:58:53.190957  780237 main.go:141] libmachine: Running pre-create checks...
	I0729 20:58:53.190968  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .PreCreateCheck
	I0729 20:58:53.191327  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetConfigRaw
	I0729 20:58:53.191706  780237 main.go:141] libmachine: Creating machine...
	I0729 20:58:53.191720  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .Create
	I0729 20:58:53.191865  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Creating KVM machine...
	I0729 20:58:53.193035  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found existing default KVM network
	I0729 20:58:53.193951  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:53.193824  780583 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:55:09:01} reservation:<nil>}
	I0729 20:58:53.194772  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:53.194686  780583 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010fbc0}
	I0729 20:58:53.194797  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | created network xml: 
	I0729 20:58:53.194809  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | <network>
	I0729 20:58:53.194817  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG |   <name>mk-kubernetes-upgrade-171355</name>
	I0729 20:58:53.194834  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG |   <dns enable='no'/>
	I0729 20:58:53.194841  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG |   
	I0729 20:58:53.194860  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0729 20:58:53.194872  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG |     <dhcp>
	I0729 20:58:53.194920  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0729 20:58:53.194942  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG |     </dhcp>
	I0729 20:58:53.194953  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG |   </ip>
	I0729 20:58:53.194965  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG |   
	I0729 20:58:53.194977  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | </network>
	I0729 20:58:53.195002  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | 
	I0729 20:58:53.200250  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | trying to create private KVM network mk-kubernetes-upgrade-171355 192.168.50.0/24...
	I0729 20:58:53.271331  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | private KVM network mk-kubernetes-upgrade-171355 192.168.50.0/24 created
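[editor note] The lines above show the kvm2 driver skipping the already-used 192.168.39.0/24 subnet, picking the free 192.168.50.0/24, and defining a dedicated libvirt network from generated XML. A minimal Go sketch of an equivalent step is below; it shells out to virsh for illustration only (the driver itself talks to the libvirt API directly), and the temp-file name is invented.

	// Sketch: define and start a private libvirt network equivalent to the
	// XML logged above, using virsh. Illustrative only, not minikube's code.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	const netXML = `<network>
	  <name>mk-kubernetes-upgrade-171355</name>
	  <dns enable='no'/>
	  <ip address='192.168.50.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.50.2' end='192.168.50.253'/>
	    </dhcp>
	  </ip>
	</network>`

	func main() {
		f, err := os.CreateTemp("", "mk-net-*.xml")
		if err != nil {
			panic(err)
		}
		defer os.Remove(f.Name())
		if _, err := f.WriteString(netXML); err != nil {
			panic(err)
		}
		f.Close()

		// Use the same qemu:///system connection the driver is configured with.
		for _, args := range [][]string{
			{"net-define", f.Name()},
			{"net-start", "mk-kubernetes-upgrade-171355"},
			{"net-autostart", "mk-kubernetes-upgrade-171355"},
		} {
			cmd := exec.Command("virsh", append([]string{"--connect", "qemu:///system"}, args...)...)
			if out, err := cmd.CombinedOutput(); err != nil {
				panic(fmt.Sprintf("virsh %v: %v\n%s", args, err, out))
			}
		}
	}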
	I0729 20:58:53.271367  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Setting up store path in /home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355 ...
	I0729 20:58:53.271393  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:53.271311  780583 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:58:53.271418  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Building disk image from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 20:58:53.271434  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Downloading /home/jenkins/minikube-integration/19344-733808/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 20:58:53.535991  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:53.535838  780583 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355/id_rsa...
	I0729 20:58:53.737204  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:53.737045  780583 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355/kubernetes-upgrade-171355.rawdisk...
	I0729 20:58:53.737243  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Writing magic tar header
	I0729 20:58:53.737265  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Writing SSH key tar header
	I0729 20:58:53.737278  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:53.737225  780583 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355 ...
	I0729 20:58:53.737376  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355
	I0729 20:58:53.737405  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355 (perms=drwx------)
	I0729 20:58:53.737432  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines
	I0729 20:58:53.737447  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:58:53.737459  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines (perms=drwxr-xr-x)
	I0729 20:58:53.737472  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808
	I0729 20:58:53.737487  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube (perms=drwxr-xr-x)
	I0729 20:58:53.737503  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808 (perms=drwxrwxr-x)
	I0729 20:58:53.737513  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 20:58:53.737529  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 20:58:53.737541  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 20:58:53.737552  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Checking permissions on dir: /home/jenkins
	I0729 20:58:53.737560  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Creating domain...
	I0729 20:58:53.737586  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Checking permissions on dir: /home
	I0729 20:58:53.737597  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Skipping /home - not owner
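[editor note] The phase above creates the machine's SSH key, writes the raw disk image, and tightens directory permissions. The sketch below reproduces the spirit of that step with standard tools (ssh-keygen, qemu-img); the real driver generates the key in Go and embeds it via a tar header at the start of the .rawdisk file, and the /tmp path here is a made-up stand-in.

	// Sketch only: SSH keypair + sparse 20000MB raw disk, matching DiskSize
	// in the config dump. Not minikube's actual disk-building method.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func run(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		machineDir := "/tmp/machines/kubernetes-upgrade-demo" // hypothetical path
		if err := os.MkdirAll(machineDir, 0o700); err != nil {
			log.Fatal(err)
		}
		run("ssh-keygen", "-q", "-t", "rsa", "-N", "", "-f", machineDir+"/id_rsa")
		run("qemu-img", "create", "-f", "raw", machineDir+"/disk.rawdisk", "20000M")
	}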
	I0729 20:58:53.738742  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) define libvirt domain using xml: 
	I0729 20:58:53.738769  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) <domain type='kvm'>
	I0729 20:58:53.738792  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)   <name>kubernetes-upgrade-171355</name>
	I0729 20:58:53.738807  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)   <memory unit='MiB'>2200</memory>
	I0729 20:58:53.738820  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)   <vcpu>2</vcpu>
	I0729 20:58:53.738830  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)   <features>
	I0729 20:58:53.738839  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     <acpi/>
	I0729 20:58:53.738847  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     <apic/>
	I0729 20:58:53.738869  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     <pae/>
	I0729 20:58:53.738883  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     
	I0729 20:58:53.738895  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)   </features>
	I0729 20:58:53.738906  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)   <cpu mode='host-passthrough'>
	I0729 20:58:53.738914  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)   
	I0729 20:58:53.738924  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)   </cpu>
	I0729 20:58:53.738932  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)   <os>
	I0729 20:58:53.738942  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     <type>hvm</type>
	I0729 20:58:53.738948  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     <boot dev='cdrom'/>
	I0729 20:58:53.738955  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     <boot dev='hd'/>
	I0729 20:58:53.738980  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     <bootmenu enable='no'/>
	I0729 20:58:53.739004  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)   </os>
	I0729 20:58:53.739017  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)   <devices>
	I0729 20:58:53.739027  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     <disk type='file' device='cdrom'>
	I0729 20:58:53.739044  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355/boot2docker.iso'/>
	I0729 20:58:53.739056  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)       <target dev='hdc' bus='scsi'/>
	I0729 20:58:53.739065  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)       <readonly/>
	I0729 20:58:53.739084  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     </disk>
	I0729 20:58:53.739113  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     <disk type='file' device='disk'>
	I0729 20:58:53.739132  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 20:58:53.739164  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355/kubernetes-upgrade-171355.rawdisk'/>
	I0729 20:58:53.739181  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)       <target dev='hda' bus='virtio'/>
	I0729 20:58:53.739190  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     </disk>
	I0729 20:58:53.739199  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     <interface type='network'>
	I0729 20:58:53.739213  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)       <source network='mk-kubernetes-upgrade-171355'/>
	I0729 20:58:53.739220  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)       <model type='virtio'/>
	I0729 20:58:53.739231  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     </interface>
	I0729 20:58:53.739242  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     <interface type='network'>
	I0729 20:58:53.739255  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)       <source network='default'/>
	I0729 20:58:53.739268  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)       <model type='virtio'/>
	I0729 20:58:53.739279  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     </interface>
	I0729 20:58:53.739296  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     <serial type='pty'>
	I0729 20:58:53.739308  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)       <target port='0'/>
	I0729 20:58:53.739321  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     </serial>
	I0729 20:58:53.739336  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     <console type='pty'>
	I0729 20:58:53.739349  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)       <target type='serial' port='0'/>
	I0729 20:58:53.739359  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     </console>
	I0729 20:58:53.739368  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     <rng model='virtio'>
	I0729 20:58:53.739379  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)       <backend model='random'>/dev/random</backend>
	I0729 20:58:53.739385  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     </rng>
	I0729 20:58:53.739395  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     
	I0729 20:58:53.739414  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)     
	I0729 20:58:53.739434  780237 main.go:141] libmachine: (kubernetes-upgrade-171355)   </devices>
	I0729 20:58:53.739443  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) </domain>
	I0729 20:58:53.739452  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) 
	I0729 20:58:53.743842  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:f8:af:5a in network default
	I0729 20:58:53.744470  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Ensuring networks are active...
	I0729 20:58:53.744491  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:58:53.745183  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Ensuring network default is active
	I0729 20:58:53.745555  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Ensuring network mk-kubernetes-upgrade-171355 is active
	I0729 20:58:53.746124  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Getting domain xml...
	I0729 20:58:53.746786  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Creating domain...
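[editor note] At this point the driver defines the libvirt domain from the XML printed above (boot2docker ISO as CD-ROM, raw disk on virtio, two virtio NICs, serial console) and boots it. A minimal sketch of the equivalent virsh flow is below; it assumes the domain XML has been saved to /tmp/domain.xml, and again the driver really uses the libvirt API (define then create) rather than virsh.

	// Sketch: define and start the domain, then list its NICs.
	package main

	import (
		"log"
		"os/exec"
	)

	func virsh(args ...string) []byte {
		out, err := exec.Command("virsh",
			append([]string{"--connect", "qemu:///system"}, args...)...).CombinedOutput()
		if err != nil {
			log.Fatalf("virsh %v: %v\n%s", args, err, out)
		}
		return out
	}

	func main() {
		virsh("define", "/tmp/domain.xml") // assumed to hold the <domain type='kvm'> XML above
		virsh("start", "kubernetes-upgrade-171355")
		// Shows the two virtio interfaces (default and mk-kubernetes-upgrade-171355).
		log.Printf("%s", virsh("domiflist", "kubernetes-upgrade-171355"))
	}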
	I0729 20:58:55.084209  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Waiting to get IP...
	I0729 20:58:55.085328  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:58:55.085877  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find current IP address of domain kubernetes-upgrade-171355 in network mk-kubernetes-upgrade-171355
	I0729 20:58:55.085939  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:55.085848  780583 retry.go:31] will retry after 237.966065ms: waiting for machine to come up
	I0729 20:58:55.325573  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:58:55.326037  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find current IP address of domain kubernetes-upgrade-171355 in network mk-kubernetes-upgrade-171355
	I0729 20:58:55.326067  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:55.325994  780583 retry.go:31] will retry after 363.579823ms: waiting for machine to come up
	I0729 20:58:55.691846  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:58:55.692534  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find current IP address of domain kubernetes-upgrade-171355 in network mk-kubernetes-upgrade-171355
	I0729 20:58:55.692624  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:55.692492  780583 retry.go:31] will retry after 385.061705ms: waiting for machine to come up
	I0729 20:58:56.079234  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:58:56.079905  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find current IP address of domain kubernetes-upgrade-171355 in network mk-kubernetes-upgrade-171355
	I0729 20:58:56.079938  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:56.079849  780583 retry.go:31] will retry after 463.79545ms: waiting for machine to come up
	I0729 20:58:56.545314  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:58:56.545863  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find current IP address of domain kubernetes-upgrade-171355 in network mk-kubernetes-upgrade-171355
	I0729 20:58:56.545894  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:56.545801  780583 retry.go:31] will retry after 524.995159ms: waiting for machine to come up
	I0729 20:58:57.072781  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:58:57.073255  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find current IP address of domain kubernetes-upgrade-171355 in network mk-kubernetes-upgrade-171355
	I0729 20:58:57.073285  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:57.073197  780583 retry.go:31] will retry after 696.96752ms: waiting for machine to come up
	I0729 20:58:57.772063  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:58:57.772641  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find current IP address of domain kubernetes-upgrade-171355 in network mk-kubernetes-upgrade-171355
	I0729 20:58:57.772660  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:57.772562  780583 retry.go:31] will retry after 1.123069532s: waiting for machine to come up
	I0729 20:58:58.896982  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:58:58.897393  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find current IP address of domain kubernetes-upgrade-171355 in network mk-kubernetes-upgrade-171355
	I0729 20:58:58.897424  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:58.897330  780583 retry.go:31] will retry after 1.048418078s: waiting for machine to come up
	I0729 20:58:59.947862  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:58:59.948461  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find current IP address of domain kubernetes-upgrade-171355 in network mk-kubernetes-upgrade-171355
	I0729 20:58:59.948497  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:58:59.948390  780583 retry.go:31] will retry after 1.488085182s: waiting for machine to come up
	I0729 20:59:01.439004  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:01.439434  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find current IP address of domain kubernetes-upgrade-171355 in network mk-kubernetes-upgrade-171355
	I0729 20:59:01.439466  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:59:01.439372  780583 retry.go:31] will retry after 1.609900749s: waiting for machine to come up
	I0729 20:59:03.051346  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:03.051819  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find current IP address of domain kubernetes-upgrade-171355 in network mk-kubernetes-upgrade-171355
	I0729 20:59:03.051844  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:59:03.051787  780583 retry.go:31] will retry after 2.078318816s: waiting for machine to come up
	I0729 20:59:05.132529  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:05.132882  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find current IP address of domain kubernetes-upgrade-171355 in network mk-kubernetes-upgrade-171355
	I0729 20:59:05.132926  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:59:05.132835  780583 retry.go:31] will retry after 2.543409255s: waiting for machine to come up
	I0729 20:59:07.677956  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:07.678365  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find current IP address of domain kubernetes-upgrade-171355 in network mk-kubernetes-upgrade-171355
	I0729 20:59:07.678394  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:59:07.678317  780583 retry.go:31] will retry after 4.423306396s: waiting for machine to come up
	I0729 20:59:12.106162  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:12.106737  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find current IP address of domain kubernetes-upgrade-171355 in network mk-kubernetes-upgrade-171355
	I0729 20:59:12.106766  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | I0729 20:59:12.106680  780583 retry.go:31] will retry after 5.571085769s: waiting for machine to come up
	I0729 20:59:17.681164  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:17.681828  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Found IP for machine: 192.168.50.242
	I0729 20:59:17.681859  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Reserving static IP address...
	I0729 20:59:17.681874  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has current primary IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:17.682348  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-171355", mac: "52:54:00:53:aa:dd", ip: "192.168.50.242"} in network mk-kubernetes-upgrade-171355
	I0729 20:59:17.764963  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Reserved static IP address: 192.168.50.242
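[editor note] The "Waiting to get IP" block above is a retry loop: the driver polls the network's DHCP leases for the domain's MAC with a growing delay until an address appears (here 192.168.50.242 after ~24s). A sketch of that loop is below; the backoff values and the four-minute deadline are invented, and the lease lookup shells out to virsh net-dhcp-leases purely for illustration.

	// Sketch: poll DHCP leases for a MAC with increasing backoff.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func leaseIP(network, mac string) (string, bool) {
		out, _ := exec.Command("virsh", "--connect", "qemu:///system",
			"net-dhcp-leases", network).CombinedOutput()
		for _, line := range strings.Split(string(out), "\n") {
			if !strings.Contains(line, mac) {
				continue
			}
			for _, f := range strings.Fields(line) {
				if strings.Contains(f, "/") { // e.g. 192.168.50.242/24
					return strings.Split(f, "/")[0], true
				}
			}
		}
		return "", false
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, ok := leaseIP("mk-kubernetes-upgrade-171355", "52:54:00:53:aa:dd"); ok {
				fmt.Println("found IP:", ip)
				return
			}
			time.Sleep(delay)
			if delay < 5*time.Second {
				delay *= 2
			}
		}
		fmt.Println("timed out waiting for a DHCP lease")
	}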
	I0729 20:59:17.764991  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Waiting for SSH to be available...
	I0729 20:59:17.765001  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Getting to WaitForSSH function...
	I0729 20:59:17.767550  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:17.767915  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:minikube Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:17.767946  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:17.768074  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Using SSH client type: external
	I0729 20:59:17.768090  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Using SSH private key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355/id_rsa (-rw-------)
	I0729 20:59:17.768121  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.242 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0729 20:59:17.768140  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | About to run SSH command:
	I0729 20:59:17.768158  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | exit 0
	I0729 20:59:17.892164  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | SSH cmd err, output: <nil>: 
	I0729 20:59:17.892468  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) KVM machine creation complete!
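[editor note] The WaitForSSH step above repeatedly opens an SSH session as user docker with the generated id_rsa key and runs "exit 0" until it succeeds. The sketch below shows one way to do that with golang.org/x/crypto/ssh; the retry count and sleep are invented, while the address, user, and key path come from the log.

	// Sketch: poll until an SSH session can run "exit 0".
	package main

	import (
		"log"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func trySSH(addr string, cfg *ssh.ClientConfig) error {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		return sess.Run("exit 0")
	}

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
			Timeout:         10 * time.Second,
		}
		for i := 0; i < 30; i++ {
			if err := trySSH("192.168.50.242:22", cfg); err == nil {
				log.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("gave up waiting for SSH")
	}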
	I0729 20:59:17.892887  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetConfigRaw
	I0729 20:59:17.893455  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .DriverName
	I0729 20:59:17.893655  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .DriverName
	I0729 20:59:17.893856  780237 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0729 20:59:17.893872  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetState
	I0729 20:59:17.895334  780237 main.go:141] libmachine: Detecting operating system of created instance...
	I0729 20:59:17.895347  780237 main.go:141] libmachine: Waiting for SSH to be available...
	I0729 20:59:17.895352  780237 main.go:141] libmachine: Getting to WaitForSSH function...
	I0729 20:59:17.895358  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHHostname
	I0729 20:59:17.897719  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:17.898128  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:17.898161  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:17.898321  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHPort
	I0729 20:59:17.898532  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:17.898716  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:17.898886  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHUsername
	I0729 20:59:17.899076  780237 main.go:141] libmachine: Using SSH client type: native
	I0729 20:59:17.899268  780237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0729 20:59:17.899278  780237 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0729 20:59:18.003285  780237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:59:18.003328  780237 main.go:141] libmachine: Detecting the provisioner...
	I0729 20:59:18.003337  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHHostname
	I0729 20:59:18.006368  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.006827  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:18.006859  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.006997  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHPort
	I0729 20:59:18.007245  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:18.007435  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:18.007625  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHUsername
	I0729 20:59:18.007834  780237 main.go:141] libmachine: Using SSH client type: native
	I0729 20:59:18.008004  780237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0729 20:59:18.008015  780237 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0729 20:59:18.112633  780237 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0729 20:59:18.112730  780237 main.go:141] libmachine: found compatible host: buildroot
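[editor note] The provisioner is detected by running `cat /etc/os-release` over SSH and matching the result against known distributions (here Buildroot). A small sketch of parsing that output is below; reading the local file stands in for the SSH round trip.

	// Sketch: parse /etc/os-release and report the fields the log keys on.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/etc/os-release")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		fields := map[string]string{}
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if k, v, ok := strings.Cut(sc.Text(), "="); ok {
				fields[k] = strings.Trim(v, `"`)
			}
		}
		fmt.Println("ID =", fields["ID"], "VERSION_ID =", fields["VERSION_ID"])
	}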
	I0729 20:59:18.112740  780237 main.go:141] libmachine: Provisioning with buildroot...
	I0729 20:59:18.112749  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetMachineName
	I0729 20:59:18.113048  780237 buildroot.go:166] provisioning hostname "kubernetes-upgrade-171355"
	I0729 20:59:18.113097  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetMachineName
	I0729 20:59:18.113320  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHHostname
	I0729 20:59:18.115625  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.116025  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:18.116069  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.116293  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHPort
	I0729 20:59:18.116497  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:18.116696  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:18.116836  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHUsername
	I0729 20:59:18.116996  780237 main.go:141] libmachine: Using SSH client type: native
	I0729 20:59:18.117169  780237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0729 20:59:18.117183  780237 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-171355 && echo "kubernetes-upgrade-171355" | sudo tee /etc/hostname
	I0729 20:59:18.234059  780237 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-171355
	
	I0729 20:59:18.234092  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHHostname
	I0729 20:59:18.237586  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.237997  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:18.238035  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.238240  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHPort
	I0729 20:59:18.238500  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:18.238700  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:18.238889  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHUsername
	I0729 20:59:18.239095  780237 main.go:141] libmachine: Using SSH client type: native
	I0729 20:59:18.239284  780237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0729 20:59:18.239307  780237 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-171355' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-171355/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-171355' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 20:59:18.357714  780237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 20:59:18.357746  780237 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 20:59:18.357819  780237 buildroot.go:174] setting up certificates
	I0729 20:59:18.357836  780237 provision.go:84] configureAuth start
	I0729 20:59:18.357853  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetMachineName
	I0729 20:59:18.358108  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetIP
	I0729 20:59:18.360922  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.361309  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:18.361345  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.361625  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHHostname
	I0729 20:59:18.363650  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.363960  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:18.363988  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.364149  780237 provision.go:143] copyHostCerts
	I0729 20:59:18.364232  780237 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem, removing ...
	I0729 20:59:18.364245  780237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 20:59:18.364310  780237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 20:59:18.364473  780237 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem, removing ...
	I0729 20:59:18.364487  780237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 20:59:18.364521  780237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 20:59:18.364613  780237 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem, removing ...
	I0729 20:59:18.364624  780237 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 20:59:18.364653  780237 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 20:59:18.364733  780237 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-171355 san=[127.0.0.1 192.168.50.242 kubernetes-upgrade-171355 localhost minikube]
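[editor note] The line above issues a server certificate signed by the minikube CA with the listed SANs (127.0.0.1, 192.168.50.242, the hostname, localhost, minikube). The crypto/x509 sketch below shows the shape of that step; for brevity it generates a throwaway CA in-process, whereas the real flow reuses ca.pem / ca-key.pem from .minikube/certs, and the key sizes and validity period are assumptions.

	// Sketch: CA-signed server certificate carrying the SANs from the log.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-171355"}},
			DNSNames:     []string{"kubernetes-upgrade-171355", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.242")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}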
	I0729 20:59:18.549444  780237 provision.go:177] copyRemoteCerts
	I0729 20:59:18.549508  780237 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 20:59:18.549544  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHHostname
	I0729 20:59:18.552469  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.552893  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:18.552918  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.553141  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHPort
	I0729 20:59:18.553330  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:18.553491  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHUsername
	I0729 20:59:18.553673  780237 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355/id_rsa Username:docker}
	I0729 20:59:18.634113  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 20:59:18.660969  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 20:59:18.683370  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0729 20:59:18.706878  780237 provision.go:87] duration metric: took 349.025734ms to configureAuth
	I0729 20:59:18.706914  780237 buildroot.go:189] setting minikube options for container-runtime
	I0729 20:59:18.707117  780237 config.go:182] Loaded profile config "kubernetes-upgrade-171355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0729 20:59:18.707208  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHHostname
	I0729 20:59:18.710068  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.710429  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:18.710460  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.710665  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHPort
	I0729 20:59:18.710897  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:18.711061  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:18.711174  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHUsername
	I0729 20:59:18.711407  780237 main.go:141] libmachine: Using SSH client type: native
	I0729 20:59:18.711655  780237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0729 20:59:18.711678  780237 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 20:59:18.981447  780237 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 20:59:18.981482  780237 main.go:141] libmachine: Checking connection to Docker...
	I0729 20:59:18.981496  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetURL
	I0729 20:59:18.982844  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Using libvirt version 6000000
	I0729 20:59:18.985440  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.985918  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:18.985954  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.986179  780237 main.go:141] libmachine: Docker is up and running!
	I0729 20:59:18.986193  780237 main.go:141] libmachine: Reticulating splines...
	I0729 20:59:18.986200  780237 client.go:171] duration metric: took 25.795417148s to LocalClient.Create
	I0729 20:59:18.986225  780237 start.go:167] duration metric: took 25.795487954s to libmachine.API.Create "kubernetes-upgrade-171355"
	I0729 20:59:18.986238  780237 start.go:293] postStartSetup for "kubernetes-upgrade-171355" (driver="kvm2")
	I0729 20:59:18.986254  780237 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 20:59:18.986276  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .DriverName
	I0729 20:59:18.986599  780237 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 20:59:18.986626  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHHostname
	I0729 20:59:18.989665  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.990211  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:18.990238  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:18.990461  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHPort
	I0729 20:59:18.990663  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:18.990826  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHUsername
	I0729 20:59:18.990978  780237 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355/id_rsa Username:docker}
	I0729 20:59:19.070190  780237 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 20:59:19.074812  780237 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 20:59:19.074842  780237 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 20:59:19.074935  780237 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 20:59:19.075040  780237 filesync.go:149] local asset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> 7409622.pem in /etc/ssl/certs
	I0729 20:59:19.075185  780237 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 20:59:19.084384  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:59:19.108479  780237 start.go:296] duration metric: took 122.221215ms for postStartSetup
	I0729 20:59:19.108546  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetConfigRaw
	I0729 20:59:19.109212  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetIP
	I0729 20:59:19.112092  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:19.112371  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:19.112399  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:19.112712  780237 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/config.json ...
	I0729 20:59:19.112898  780237 start.go:128] duration metric: took 25.943975067s to createHost
	I0729 20:59:19.112919  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHHostname
	I0729 20:59:19.115443  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:19.115814  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:19.115850  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:19.115986  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHPort
	I0729 20:59:19.116231  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:19.116424  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:19.116692  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHUsername
	I0729 20:59:19.116885  780237 main.go:141] libmachine: Using SSH client type: native
	I0729 20:59:19.117073  780237 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.50.242 22 <nil> <nil>}
	I0729 20:59:19.117083  780237 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 20:59:19.220433  780237 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722286759.199355518
	
	I0729 20:59:19.220461  780237 fix.go:216] guest clock: 1722286759.199355518
	I0729 20:59:19.220471  780237 fix.go:229] Guest: 2024-07-29 20:59:19.199355518 +0000 UTC Remote: 2024-07-29 20:59:19.112908364 +0000 UTC m=+52.128387676 (delta=86.447154ms)
	I0729 20:59:19.220492  780237 fix.go:200] guest clock delta is within tolerance: 86.447154ms
	I0729 20:59:19.220497  780237 start.go:83] releasing machines lock for "kubernetes-upgrade-171355", held for 26.051792082s
	I0729 20:59:19.220521  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .DriverName
	I0729 20:59:19.220813  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetIP
	I0729 20:59:19.224192  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:19.224663  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:19.224700  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:19.224823  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .DriverName
	I0729 20:59:19.225470  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .DriverName
	I0729 20:59:19.225664  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .DriverName
	I0729 20:59:19.225781  780237 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 20:59:19.225834  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHHostname
	I0729 20:59:19.225870  780237 ssh_runner.go:195] Run: cat /version.json
	I0729 20:59:19.225894  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHHostname
	I0729 20:59:19.228779  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:19.228886  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:19.229089  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:19.229123  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:19.229242  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:19.229274  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:19.229311  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHPort
	I0729 20:59:19.229491  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHPort
	I0729 20:59:19.229537  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:19.229638  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHUsername
	I0729 20:59:19.229724  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 20:59:19.229807  780237 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355/id_rsa Username:docker}
	I0729 20:59:19.229854  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHUsername
	I0729 20:59:19.229997  780237 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355/id_rsa Username:docker}
	I0729 20:59:19.343940  780237 ssh_runner.go:195] Run: systemctl --version
	I0729 20:59:19.349914  780237 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 20:59:19.513457  780237 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 20:59:19.520038  780237 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 20:59:19.520151  780237 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 20:59:19.535226  780237 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0729 20:59:19.535260  780237 start.go:495] detecting cgroup driver to use...
	I0729 20:59:19.535347  780237 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 20:59:19.558474  780237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 20:59:19.574048  780237 docker.go:216] disabling cri-docker service (if available) ...
	I0729 20:59:19.574106  780237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 20:59:19.588804  780237 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 20:59:19.603156  780237 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 20:59:19.729548  780237 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 20:59:19.877782  780237 docker.go:232] disabling docker service ...
	I0729 20:59:19.877895  780237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 20:59:19.893128  780237 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 20:59:19.906464  780237 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 20:59:20.049149  780237 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 20:59:20.197390  780237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 20:59:20.211797  780237 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 20:59:20.232500  780237 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0729 20:59:20.232564  780237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:59:20.244133  780237 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 20:59:20.244214  780237 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:59:20.255742  780237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:59:20.265849  780237 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 20:59:20.275724  780237 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 20:59:20.285805  780237 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 20:59:20.294403  780237 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0729 20:59:20.294463  780237 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0729 20:59:20.307514  780237 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 20:59:20.316566  780237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:59:20.434416  780237 ssh_runner.go:195] Run: sudo systemctl restart crio
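	For reference, the CRI-O reconfiguration performed above reduces to a handful of host-side edits. A rough manual equivalent, assuming the same /etc/crio/crio.conf.d/02-crio.conf drop-in seen in this run (a sketch, not the exact sequence minikube executes):
	  # point crictl at the CRI-O socket
	  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	  # pin the pause image and switch the cgroup manager to cgroupfs
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	  # make bridged traffic visible to iptables and enable IPv4 forwarding
	  sudo modprobe br_netfilter
	  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	  sudo systemctl daemon-reload && sudo systemctl restart crio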
	I0729 20:59:20.585758  780237 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 20:59:20.585847  780237 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 20:59:20.590568  780237 start.go:563] Will wait 60s for crictl version
	I0729 20:59:20.590624  780237 ssh_runner.go:195] Run: which crictl
	I0729 20:59:20.595159  780237 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 20:59:20.635706  780237 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 20:59:20.635806  780237 ssh_runner.go:195] Run: crio --version
	I0729 20:59:20.673066  780237 ssh_runner.go:195] Run: crio --version
	I0729 20:59:20.709421  780237 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0729 20:59:20.710534  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetIP
	I0729 20:59:20.714170  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:20.714677  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 20:59:20.714716  780237 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 20:59:20.714916  780237 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0729 20:59:20.721233  780237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 20:59:20.737308  780237 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-171355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-171355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.242 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 20:59:20.737446  780237 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 20:59:20.737517  780237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:59:20.780161  780237 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 20:59:20.780242  780237 ssh_runner.go:195] Run: which lz4
	I0729 20:59:20.784147  780237 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0729 20:59:20.789201  780237 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0729 20:59:20.789236  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0729 20:59:22.207023  780237 crio.go:462] duration metric: took 1.422907978s to copy over tarball
	I0729 20:59:22.207102  780237 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0729 20:59:25.089711  780237 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.882530987s)
	I0729 20:59:25.089754  780237 crio.go:469] duration metric: took 2.882701453s to extract the tarball
	I0729 20:59:25.089766  780237 ssh_runner.go:146] rm: /preloaded.tar.lz4
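	The preload handling above (copy the tarball to the node, unpack it into /var, remove it) can be reproduced by hand. A sketch using the SSH key and cache paths already shown in this log; the scp runs on the host, the rest on the node, and the tarball is staged under /tmp here rather than / as in the log:
	  # host: push the cached preload tarball to the VM
	  scp -i /home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355/id_rsa \
	      /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
	      docker@192.168.50.242:/tmp/preloaded.tar.lz4
	  # node: unpack image layers into /var, preserving security xattrs, then clean up and verify
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4
	  sudo rm /tmp/preloaded.tar.lz4
	  sudo crictl images --output json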
	I0729 20:59:25.139230  780237 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 20:59:25.186157  780237 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0729 20:59:25.186194  780237 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0729 20:59:25.186261  780237 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 20:59:25.186308  780237 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 20:59:25.186339  780237 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 20:59:25.186396  780237 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0729 20:59:25.186349  780237 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 20:59:25.186302  780237 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 20:59:25.186388  780237 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0729 20:59:25.186322  780237 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0729 20:59:25.187874  780237 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 20:59:25.187973  780237 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0729 20:59:25.187998  780237 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0729 20:59:25.188019  780237 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 20:59:25.188073  780237 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0729 20:59:25.187879  780237 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 20:59:25.188098  780237 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 20:59:25.188191  780237 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 20:59:25.429652  780237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0729 20:59:25.436754  780237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0729 20:59:25.437163  780237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 20:59:25.445298  780237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0729 20:59:25.458834  780237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0729 20:59:25.486195  780237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0729 20:59:25.487461  780237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0729 20:59:25.497491  780237 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0729 20:59:25.497546  780237 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0729 20:59:25.497592  780237 ssh_runner.go:195] Run: which crictl
	I0729 20:59:25.593715  780237 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0729 20:59:25.593772  780237 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0729 20:59:25.593831  780237 ssh_runner.go:195] Run: which crictl
	I0729 20:59:25.593974  780237 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0729 20:59:25.594005  780237 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 20:59:25.594038  780237 ssh_runner.go:195] Run: which crictl
	I0729 20:59:25.594137  780237 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0729 20:59:25.594163  780237 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0729 20:59:25.594189  780237 ssh_runner.go:195] Run: which crictl
	I0729 20:59:25.625849  780237 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0729 20:59:25.625893  780237 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0729 20:59:25.625965  780237 ssh_runner.go:195] Run: which crictl
	I0729 20:59:25.625964  780237 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0729 20:59:25.626000  780237 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0729 20:59:25.626031  780237 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0729 20:59:25.626044  780237 ssh_runner.go:195] Run: which crictl
	I0729 20:59:25.626067  780237 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0729 20:59:25.626089  780237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0729 20:59:25.626171  780237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0729 20:59:25.626184  780237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0729 20:59:25.626110  780237 ssh_runner.go:195] Run: which crictl
	I0729 20:59:25.626230  780237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0729 20:59:25.630195  780237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0729 20:59:25.719686  780237 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0729 20:59:25.719702  780237 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0729 20:59:25.719755  780237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0729 20:59:25.731169  780237 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0729 20:59:25.731177  780237 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0729 20:59:25.731238  780237 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0729 20:59:25.743646  780237 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0729 20:59:25.783654  780237 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0729 20:59:25.783716  780237 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0729 20:59:26.088300  780237 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 20:59:26.227707  780237 cache_images.go:92] duration metric: took 1.04148953s to LoadCachedImages
	W0729 20:59:26.227831  780237 out.go:239] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
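	The LoadCachedImages failure above is a missing file on the host side: at least some of the per-image archives under the cache directory named in the error were never written, so there is nothing to transfer. A quick way to see what the cache actually contains (path taken from the messages above):
	  ls -l /home/jenkins/minikube-integration/19344-733808/.minikube/cache/images/amd64/registry.k8s.io/
	With an incomplete cache minikube simply continues, and the required v1.20.0 images are pulled during kubeadm's preflight phase further down.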
	I0729 20:59:26.227868  780237 kubeadm.go:934] updating node { 192.168.50.242 8443 v1.20.0 crio true true} ...
	I0729 20:59:26.228015  780237 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-171355 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.242
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-171355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 20:59:26.228123  780237 ssh_runner.go:195] Run: crio config
	I0729 20:59:26.277890  780237 cni.go:84] Creating CNI manager for ""
	I0729 20:59:26.277911  780237 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 20:59:26.277921  780237 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 20:59:26.277939  780237 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.242 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-171355 NodeName:kubernetes-upgrade-171355 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.242"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.242 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 20:59:26.278132  780237 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.242
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-171355"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.242
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.242"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0729 20:59:26.278213  780237 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 20:59:26.288511  780237 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 20:59:26.288577  780237 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 20:59:26.297537  780237 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0729 20:59:26.313265  780237 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 20:59:26.332045  780237 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
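	Before this file is handed to kubeadm, the generated configuration (written to /var/tmp/minikube/kubeadm.yaml.new here) can be sanity-checked on the node without changing any cluster state. A sketch using stock kubeadm subcommands and the binary path from this run:
	  # images the config implies; should match the v1.20.0 set handled above
	  sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml.new
	  # walk the init phases without persisting anything
	  sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run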
	I0729 20:59:26.351072  780237 ssh_runner.go:195] Run: grep 192.168.50.242	control-plane.minikube.internal$ /etc/hosts
	I0729 20:59:26.354936  780237 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.242	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 20:59:26.367902  780237 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 20:59:26.503090  780237 ssh_runner.go:195] Run: sudo systemctl start kubelet
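	Once the unit file and the 10-kubeadm.conf drop-in are in place and the kubelet has been started, its effective configuration and health can be inspected directly on the node (standard systemd commands; the test itself does not run these at this point):
	  systemctl cat kubelet                      # unit plus the 10-kubeadm.conf drop-in written above
	  sudo systemctl status kubelet --no-pager
	  sudo journalctl -u kubelet --no-pager | tail -n 50
	This is worth keeping in mind for this run, since the kubeadm init below ultimately fails on the kubelet health check.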
	I0729 20:59:26.522021  780237 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355 for IP: 192.168.50.242
	I0729 20:59:26.522048  780237 certs.go:194] generating shared ca certs ...
	I0729 20:59:26.522072  780237 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:59:26.522279  780237 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 20:59:26.522325  780237 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 20:59:26.522337  780237 certs.go:256] generating profile certs ...
	I0729 20:59:26.522407  780237 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/client.key
	I0729 20:59:26.522426  780237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/client.crt with IP's: []
	I0729 20:59:26.761606  780237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/client.crt ...
	I0729 20:59:26.761650  780237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/client.crt: {Name:mkcd6630b6785a6506de080159940977f831bfef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:59:26.761863  780237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/client.key ...
	I0729 20:59:26.761889  780237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/client.key: {Name:mkd8c235ac737d9b64731274bf8a4494455cccd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:59:26.762009  780237 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/apiserver.key.8896f151
	I0729 20:59:26.762037  780237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/apiserver.crt.8896f151 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.242]
	I0729 20:59:26.939904  780237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/apiserver.crt.8896f151 ...
	I0729 20:59:26.939942  780237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/apiserver.crt.8896f151: {Name:mkd63842e8256985556567f9cd9b60161c905d92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:59:26.940152  780237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/apiserver.key.8896f151 ...
	I0729 20:59:26.940184  780237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/apiserver.key.8896f151: {Name:mk9b90788a8dfe71aeeef605826511b0c0397fcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:59:26.940304  780237 certs.go:381] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/apiserver.crt.8896f151 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/apiserver.crt
	I0729 20:59:26.940411  780237 certs.go:385] copying /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/apiserver.key.8896f151 -> /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/apiserver.key
	I0729 20:59:26.940476  780237 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/proxy-client.key
	I0729 20:59:26.940492  780237 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/proxy-client.crt with IP's: []
	I0729 20:59:27.249139  780237 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/proxy-client.crt ...
	I0729 20:59:27.249181  780237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/proxy-client.crt: {Name:mkfdeaef034cc5eaea822f86f12aa001382a1120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:59:27.249402  780237 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/proxy-client.key ...
	I0729 20:59:27.249421  780237 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/proxy-client.key: {Name:mk251914f7cdb21214a3ba9025636b32e8bfc50c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 20:59:27.249616  780237 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem (1338 bytes)
	W0729 20:59:27.249658  780237 certs.go:480] ignoring /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962_empty.pem, impossibly tiny 0 bytes
	I0729 20:59:27.249671  780237 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 20:59:27.249709  780237 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 20:59:27.249754  780237 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 20:59:27.249791  780237 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 20:59:27.249858  780237 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 20:59:27.250488  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 20:59:27.281166  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 20:59:27.308629  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 20:59:27.337720  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 20:59:27.365084  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0729 20:59:27.395978  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0729 20:59:27.435334  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 20:59:27.461265  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0729 20:59:27.483495  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 20:59:27.507243  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem --> /usr/share/ca-certificates/740962.pem (1338 bytes)
	I0729 20:59:27.533527  780237 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /usr/share/ca-certificates/7409622.pem (1708 bytes)
	I0729 20:59:27.556573  780237 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 20:59:27.573800  780237 ssh_runner.go:195] Run: openssl version
	I0729 20:59:27.579674  780237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/740962.pem && ln -fs /usr/share/ca-certificates/740962.pem /etc/ssl/certs/740962.pem"
	I0729 20:59:27.589917  780237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/740962.pem
	I0729 20:59:27.594425  780237 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 20:59:27.594502  780237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/740962.pem
	I0729 20:59:27.600660  780237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/740962.pem /etc/ssl/certs/51391683.0"
	I0729 20:59:27.610964  780237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7409622.pem && ln -fs /usr/share/ca-certificates/7409622.pem /etc/ssl/certs/7409622.pem"
	I0729 20:59:27.621644  780237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7409622.pem
	I0729 20:59:27.625948  780237 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 20:59:27.626010  780237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7409622.pem
	I0729 20:59:27.631379  780237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7409622.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 20:59:27.641210  780237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 20:59:27.651132  780237 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:59:27.655693  780237 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:59:27.655765  780237 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 20:59:27.661635  780237 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
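	The ln -fs targets above are OpenSSL subject-hash links: each CA placed in /usr/share/ca-certificates is exposed in /etc/ssl/certs under <hash>.0 so library trust lookups can find it. The general pattern, sketched here for the minikube CA:
	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 for this CA, per the log
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"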
	I0729 20:59:27.672248  780237 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 20:59:27.676303  780237 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 20:59:27.676376  780237 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-171355 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-171355 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.242 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:59:27.676462  780237 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 20:59:27.676512  780237 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 20:59:27.714099  780237 cri.go:89] found id: ""
	I0729 20:59:27.714196  780237 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 20:59:27.723390  780237 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 20:59:27.733228  780237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 20:59:27.743839  780237 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 20:59:27.743864  780237 kubeadm.go:157] found existing configuration files:
	
	I0729 20:59:27.743916  780237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 20:59:27.752652  780237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 20:59:27.752715  780237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 20:59:27.764492  780237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 20:59:27.775985  780237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 20:59:27.776074  780237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 20:59:27.785333  780237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 20:59:27.794403  780237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 20:59:27.794469  780237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 20:59:27.803871  780237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 20:59:27.812487  780237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 20:59:27.812558  780237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 20:59:27.821606  780237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 20:59:27.938947  780237 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 20:59:27.939100  780237 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 20:59:28.110802  780237 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 20:59:28.110977  780237 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 20:59:28.111139  780237 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 20:59:28.336883  780237 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 20:59:28.471543  780237 out.go:204]   - Generating certificates and keys ...
	I0729 20:59:28.471686  780237 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 20:59:28.471817  780237 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 20:59:28.471945  780237 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 20:59:28.579957  780237 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 20:59:28.783878  780237 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 20:59:28.892841  780237 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 20:59:29.071324  780237 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 20:59:29.071767  780237 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-171355 localhost] and IPs [192.168.50.242 127.0.0.1 ::1]
	I0729 20:59:29.137991  780237 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 20:59:29.138140  780237 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-171355 localhost] and IPs [192.168.50.242 127.0.0.1 ::1]
	I0729 20:59:29.457067  780237 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 20:59:29.870678  780237 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 20:59:30.023684  780237 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 20:59:30.023941  780237 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 20:59:30.181611  780237 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 20:59:30.556182  780237 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 20:59:30.659126  780237 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 20:59:30.768184  780237 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 20:59:30.783140  780237 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 20:59:30.785785  780237 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 20:59:30.785862  780237 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 20:59:30.917602  780237 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 20:59:30.919363  780237 out.go:204]   - Booting up control plane ...
	I0729 20:59:30.919500  780237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 20:59:30.922888  780237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 20:59:30.925166  780237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 20:59:30.926417  780237 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 20:59:30.932541  780237 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 21:00:10.928468  780237 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 21:00:10.929201  780237 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 21:00:10.929445  780237 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 21:00:15.929393  780237 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 21:00:15.929671  780237 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 21:00:25.928693  780237 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 21:00:25.928986  780237 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 21:00:45.928247  780237 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 21:00:45.928509  780237 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 21:01:25.929797  780237 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 21:01:25.930090  780237 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 21:01:25.930118  780237 kubeadm.go:310] 
	I0729 21:01:25.930167  780237 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 21:01:25.930307  780237 kubeadm.go:310] 		timed out waiting for the condition
	I0729 21:01:25.930325  780237 kubeadm.go:310] 
	I0729 21:01:25.930368  780237 kubeadm.go:310] 	This error is likely caused by:
	I0729 21:01:25.930418  780237 kubeadm.go:310] 		- The kubelet is not running
	I0729 21:01:25.930542  780237 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 21:01:25.930553  780237 kubeadm.go:310] 
	I0729 21:01:25.930676  780237 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 21:01:25.930714  780237 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 21:01:25.930751  780237 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 21:01:25.930758  780237 kubeadm.go:310] 
	I0729 21:01:25.930886  780237 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 21:01:25.930983  780237 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 21:01:25.930992  780237 kubeadm.go:310] 
	I0729 21:01:25.931109  780237 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 21:01:25.931211  780237 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 21:01:25.931298  780237 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 21:01:25.931385  780237 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 21:01:25.931392  780237 kubeadm.go:310] 
	I0729 21:01:25.932704  780237 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 21:01:25.932852  780237 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 21:01:25.933035  780237 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
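	The failure mode here is kubeadm timing out while polling the kubelet health endpoint on localhost:10248. The troubleshooting commands kubeadm prints can be run as-is on the node; consolidated into one pass for convenience (a sketch, using the same CRI-O endpoint as above):
	  sudo systemctl status kubelet --no-pager
	  sudo journalctl -xeu kubelet --no-pager | tail -n 100
	  sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	  # for any failing container id found:
	  # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID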
	W0729 21:01:25.933118  780237 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-171355 localhost] and IPs [192.168.50.242 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-171355 localhost] and IPs [192.168.50.242 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-171355 localhost] and IPs [192.168.50.242 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-171355 localhost] and IPs [192.168.50.242 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0729 21:01:25.933188  780237 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0729 21:01:26.581701  780237 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 21:01:26.599996  780237 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 21:01:26.613141  780237 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 21:01:26.613171  780237 kubeadm.go:157] found existing configuration files:
	
	I0729 21:01:26.613223  780237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 21:01:26.627738  780237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 21:01:26.627823  780237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 21:01:26.638901  780237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 21:01:26.648188  780237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 21:01:26.648254  780237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 21:01:26.658192  780237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 21:01:26.668010  780237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 21:01:26.668095  780237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 21:01:26.679718  780237 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 21:01:26.694307  780237 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 21:01:26.694390  780237 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 21:01:26.707144  780237 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0729 21:01:26.795401  780237 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0729 21:01:26.795485  780237 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 21:01:26.976577  780237 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 21:01:26.976732  780237 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 21:01:26.976853  780237 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0729 21:01:27.187517  780237 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 21:01:27.189404  780237 out.go:204]   - Generating certificates and keys ...
	I0729 21:01:27.189556  780237 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 21:01:27.189688  780237 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 21:01:27.189850  780237 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0729 21:01:27.189950  780237 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0729 21:01:27.190041  780237 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0729 21:01:27.190115  780237 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0729 21:01:27.190207  780237 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0729 21:01:27.190492  780237 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0729 21:01:27.190906  780237 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0729 21:01:27.191395  780237 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0729 21:01:27.191473  780237 kubeadm.go:310] [certs] Using the existing "sa" key
	I0729 21:01:27.191567  780237 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 21:01:27.363788  780237 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 21:01:27.533595  780237 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 21:01:27.937923  780237 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 21:01:28.142744  780237 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 21:01:28.157317  780237 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 21:01:28.159123  780237 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 21:01:28.159221  780237 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 21:01:28.303560  780237 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 21:01:28.305745  780237 out.go:204]   - Booting up control plane ...
	I0729 21:01:28.305894  780237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 21:01:28.306404  780237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 21:01:28.308935  780237 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 21:01:28.310038  780237 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 21:01:28.315471  780237 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0729 21:02:08.319075  780237 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I0729 21:02:08.319193  780237 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 21:02:08.319463  780237 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 21:02:13.320221  780237 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 21:02:13.320537  780237 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 21:02:23.320682  780237 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 21:02:23.320920  780237 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 21:02:43.319951  780237 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 21:02:43.320182  780237 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 21:03:23.319974  780237 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0729 21:03:23.320287  780237 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0729 21:03:23.320321  780237 kubeadm.go:310] 
	I0729 21:03:23.320371  780237 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I0729 21:03:23.320463  780237 kubeadm.go:310] 		timed out waiting for the condition
	I0729 21:03:23.320481  780237 kubeadm.go:310] 
	I0729 21:03:23.320525  780237 kubeadm.go:310] 	This error is likely caused by:
	I0729 21:03:23.320577  780237 kubeadm.go:310] 		- The kubelet is not running
	I0729 21:03:23.320726  780237 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0729 21:03:23.320742  780237 kubeadm.go:310] 
	I0729 21:03:23.320905  780237 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0729 21:03:23.320962  780237 kubeadm.go:310] 		- 'systemctl status kubelet'
	I0729 21:03:23.321019  780237 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I0729 21:03:23.321029  780237 kubeadm.go:310] 
	I0729 21:03:23.321179  780237 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0729 21:03:23.321302  780237 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0729 21:03:23.321314  780237 kubeadm.go:310] 
	I0729 21:03:23.321485  780237 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0729 21:03:23.321582  780237 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0729 21:03:23.321689  780237 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I0729 21:03:23.321794  780237 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0729 21:03:23.321809  780237 kubeadm.go:310] 
	I0729 21:03:23.322679  780237 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0729 21:03:23.322774  780237 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0729 21:03:23.322858  780237 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I0729 21:03:23.322949  780237 kubeadm.go:394] duration metric: took 3m55.646580443s to StartCluster
	I0729 21:03:23.323024  780237 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0729 21:03:23.323096  780237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 21:03:23.368486  780237 cri.go:89] found id: ""
	I0729 21:03:23.368525  780237 logs.go:276] 0 containers: []
	W0729 21:03:23.368538  780237 logs.go:278] No container was found matching "kube-apiserver"
	I0729 21:03:23.368546  780237 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0729 21:03:23.368645  780237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 21:03:23.420747  780237 cri.go:89] found id: ""
	I0729 21:03:23.420780  780237 logs.go:276] 0 containers: []
	W0729 21:03:23.420791  780237 logs.go:278] No container was found matching "etcd"
	I0729 21:03:23.420801  780237 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0729 21:03:23.420877  780237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 21:03:23.464490  780237 cri.go:89] found id: ""
	I0729 21:03:23.464530  780237 logs.go:276] 0 containers: []
	W0729 21:03:23.464543  780237 logs.go:278] No container was found matching "coredns"
	I0729 21:03:23.464552  780237 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0729 21:03:23.464620  780237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 21:03:23.500432  780237 cri.go:89] found id: ""
	I0729 21:03:23.500463  780237 logs.go:276] 0 containers: []
	W0729 21:03:23.500473  780237 logs.go:278] No container was found matching "kube-scheduler"
	I0729 21:03:23.500480  780237 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0729 21:03:23.500548  780237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 21:03:23.541716  780237 cri.go:89] found id: ""
	I0729 21:03:23.541752  780237 logs.go:276] 0 containers: []
	W0729 21:03:23.541776  780237 logs.go:278] No container was found matching "kube-proxy"
	I0729 21:03:23.541785  780237 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 21:03:23.541860  780237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 21:03:23.579156  780237 cri.go:89] found id: ""
	I0729 21:03:23.579188  780237 logs.go:276] 0 containers: []
	W0729 21:03:23.579200  780237 logs.go:278] No container was found matching "kube-controller-manager"
	I0729 21:03:23.579208  780237 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0729 21:03:23.579276  780237 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 21:03:23.613700  780237 cri.go:89] found id: ""
	I0729 21:03:23.613737  780237 logs.go:276] 0 containers: []
	W0729 21:03:23.613749  780237 logs.go:278] No container was found matching "kindnet"
	I0729 21:03:23.613762  780237 logs.go:123] Gathering logs for dmesg ...
	I0729 21:03:23.613779  780237 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 21:03:23.628831  780237 logs.go:123] Gathering logs for describe nodes ...
	I0729 21:03:23.628867  780237 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0729 21:03:23.782891  780237 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0729 21:03:23.782918  780237 logs.go:123] Gathering logs for CRI-O ...
	I0729 21:03:23.782936  780237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0729 21:03:23.916704  780237 logs.go:123] Gathering logs for container status ...
	I0729 21:03:23.916764  780237 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 21:03:23.967544  780237 logs.go:123] Gathering logs for kubelet ...
	I0729 21:03:23.967583  780237 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 21:03:24.039989  780237 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0729 21:03:24.040070  780237 out.go:239] * 
	* 
	W0729 21:03:24.040145  780237 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 21:03:24.040178  780237 out.go:239] * 
	* 
	W0729 21:03:24.041453  780237 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 21:03:24.044990  780237 out.go:177] 
	W0729 21:03:24.046174  780237 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0729 21:03:24.046237  780237 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0729 21:03:24.046264  780237 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0729 21:03:24.047746  780237 out.go:177] 

                                                
                                                
** /stderr **
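The run above exits with K8S_KUBELET_NOT_RUNNING after kubeadm's wait-control-plane phase times out on the kubelet health check. A minimal triage sketch, assuming shell access to the node via `minikube ssh` for the profile named in this run; the checks themselves are exactly the ones kubeadm and minikube print above, nothing here is specific to this report beyond the profile name:

	# open a shell on the node for this profile
	minikube ssh -p kubernetes-upgrade-171355
	# inside the node: check the kubelet unit and its journal
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list any control-plane containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# back on the host: the suggestion printed above, retrying with the kubelet cgroup driver pinned to systemd
	out/minikube-linux-amd64 start -p kubernetes-upgrade-171355 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd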
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-171355 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-171355
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-171355: (2.318821848s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-171355 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-171355 status --format={{.Host}}: exit status 7 (76.63485ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171355 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-171355 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (45.798131096s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-171355 version --output=json
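The upgrade to v1.31.0-beta.0 succeeds, and the kubectl version check above is how the new server version can be confirmed. A small sketch of reading it by hand (jq is an assumption, not something the test uses; the context name comes from this run):

	kubectl --context kubernetes-upgrade-171355 version --output=json \
	  | jq -r '.serverVersion.gitVersion'   # should report v1.31.0-beta.0 after the upgrade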
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171355 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-171355 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (89.050229ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-171355] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-171355
	    minikube start -p kubernetes-upgrade-171355 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1713552 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-171355 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
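The downgrade attempt fails fast with K8S_DOWNGRADE_UNSUPPORTED, as the test expects, and the suggestion block lists three recovery paths. A sketch of the first and third options using the names from this run (the second option simply starts a fresh profile alongside the existing one):

	# option 1: recreate the profile at the older version
	minikube delete -p kubernetes-upgrade-171355
	minikube start -p kubernetes-upgrade-171355 --kubernetes-version=v1.20.0
	# option 3: keep the existing cluster at the newer version (what the test does next)
	minikube start -p kubernetes-upgrade-171355 --kubernetes-version=v1.31.0-beta.0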
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-171355 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-171355 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m15.753838331s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-07-29 21:06:28.216462429 +0000 UTC m=+6192.385169609
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-171355 -n kubernetes-upgrade-171355
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-171355 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-171355 logs -n 25: (1.539481195s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-404553 sudo systemctl                        | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | status kubelet --all --full                          |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo systemctl                        | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | cat kubelet --no-pager                               |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo journalctl                       | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | -xeu kubelet --all --full                            |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo cat                              | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo cat                              | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo systemctl                        | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC |                     |
	|         | status docker --all --full                           |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo systemctl                        | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | cat docker --no-pager                                |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo cat                              | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo docker                           | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo systemctl                        | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC |                     |
	|         | status cri-docker --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo systemctl                        | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | cat cri-docker --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo cat                              | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo cat                              | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo                                  | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo systemctl                        | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC |                     |
	|         | status containerd --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo systemctl                        | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | cat containerd --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo cat                              | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo cat                              | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo containerd                       | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | config dump                                          |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo systemctl                        | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | status crio --all --full                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo systemctl                        | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | cat crio --no-pager                                  |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo find                             | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p auto-404553 sudo crio                             | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p auto-404553                                       | auto-404553   | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC | 29 Jul 24 21:06 UTC |
	| start   | -p calico-404553 --memory=3072                       | calico-404553 | jenkins | v1.33.1 | 29 Jul 24 21:06 UTC |                     |
	|         | --alsologtostderr --wait=true                        |               |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |               |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |               |         |         |                     |                     |
	|         | --container-runtime=crio                             |               |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
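The audit table above is the post-mortem sweep run against the auto-404553 profile: the same kubelet/runtime diagnostics (systemctl status/cat, config files, crio config) gathered one `minikube ssh` invocation at a time before the profile was deleted. A rough sketch of driving a few of those commands yourself against a still-running profile (commands copied from the table; error handling deliberately loose):

// Illustrative only: collect a handful of the diagnostics from the audit table.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "auto-404553" // assumes the profile still exists; the table shows it was deleted afterwards
	cmds := []string{
		"sudo systemctl status kubelet --all --full --no-pager",
		"sudo cat /var/lib/kubelet/config.yaml",
		"sudo systemctl cat crio --no-pager",
		"sudo crio config",
	}
	for _, c := range cmds {
		out, err := exec.Command("minikube", "ssh", "-p", profile, c).CombinedOutput()
		fmt.Printf("### %s (err=%v)\n%s\n", c, err, out)
	}
}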
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 21:06:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 21:06:14.276552  790394 out.go:291] Setting OutFile to fd 1 ...
	I0729 21:06:14.276651  790394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 21:06:14.276659  790394 out.go:304] Setting ErrFile to fd 2...
	I0729 21:06:14.276664  790394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 21:06:14.276832  790394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 21:06:14.277415  790394 out.go:298] Setting JSON to false
	I0729 21:06:14.278561  790394 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":17321,"bootTime":1722269853,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 21:06:14.278632  790394 start.go:139] virtualization: kvm guest
	I0729 21:06:14.280540  790394 out.go:177] * [calico-404553] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 21:06:14.281931  790394 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 21:06:14.281975  790394 notify.go:220] Checking for updates...
	I0729 21:06:14.284545  790394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 21:06:14.285733  790394 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 21:06:14.286882  790394 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 21:06:14.288084  790394 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 21:06:14.289879  790394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 21:06:14.291773  790394 config.go:182] Loaded profile config "cert-expiration-461577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 21:06:14.291864  790394 config.go:182] Loaded profile config "kindnet-404553": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 21:06:14.291941  790394 config.go:182] Loaded profile config "kubernetes-upgrade-171355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 21:06:14.292103  790394 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 21:06:14.332972  790394 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 21:06:14.334142  790394 start.go:297] selected driver: kvm2
	I0729 21:06:14.334171  790394 start.go:901] validating driver "kvm2" against <nil>
	I0729 21:06:14.334193  790394 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 21:06:14.335247  790394 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 21:06:14.335365  790394 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 21:06:14.352565  790394 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 21:06:14.352621  790394 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 21:06:14.352873  790394 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 21:06:14.352935  790394 cni.go:84] Creating CNI manager for "calico"
	I0729 21:06:14.352946  790394 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0729 21:06:14.353009  790394 start.go:340] cluster config:
	{Name:calico-404553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-404553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 21:06:14.353100  790394 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 21:06:14.354852  790394 out.go:177] * Starting "calico-404553" primary control-plane node in "calico-404553" cluster
	I0729 21:06:14.356160  790394 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 21:06:14.356209  790394 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 21:06:14.356221  790394 cache.go:56] Caching tarball of preloaded images
	I0729 21:06:14.356308  790394 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 21:06:14.356320  790394 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
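The preload step above is a plain cache lookup: the v1.30.3/cri-o tarball already sits under .minikube/cache, so the download is skipped and only existence is verified. A minimal check-then-fetch sketch of that behaviour (the URL below is a hypothetical placeholder, not the real preload bucket path):

// Illustrative cache check: reuse a local preload tarball if present,
// otherwise download it. The URL is a placeholder, not minikube's bucket.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func ensurePreload(path, url string) error {
	if _, err := os.Stat(path); err == nil {
		fmt.Println("found local preload, skipping download:", path)
		return nil
	}
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = io.Copy(f, resp.Body)
	return err
}

func main() {
	_ = ensurePreload(
		os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4"),
		"https://example.invalid/preload.tar.lz4", // placeholder URL
	)
}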
	I0729 21:06:14.356423  790394 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/calico-404553/config.json ...
	I0729 21:06:14.356442  790394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/calico-404553/config.json: {Name:mk7b96d71ec642c92497d6ea1259113acea2b6ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 21:06:14.356619  790394 start.go:360] acquireMachinesLock for calico-404553: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 21:06:14.356660  790394 start.go:364] duration metric: took 22.7µs to acquireMachinesLock for "calico-404553"
	I0729 21:06:14.356679  790394 start.go:93] Provisioning new machine with config: &{Name:calico-404553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-404553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 21:06:14.356738  790394 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 21:06:12.734443  788420 node_ready.go:53] node "kindnet-404553" has status "Ready":"False"
	I0729 21:06:15.233805  788420 node_ready.go:53] node "kindnet-404553" has status "Ready":"False"
	I0729 21:06:14.358160  790394 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0729 21:06:14.358296  790394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 21:06:14.358333  790394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 21:06:14.375751  790394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40011
	I0729 21:06:14.376469  790394 main.go:141] libmachine: () Calling .GetVersion
	I0729 21:06:14.377165  790394 main.go:141] libmachine: Using API Version  1
	I0729 21:06:14.377187  790394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 21:06:14.377595  790394 main.go:141] libmachine: () Calling .GetMachineName
	I0729 21:06:14.377845  790394 main.go:141] libmachine: (calico-404553) Calling .GetMachineName
	I0729 21:06:14.378061  790394 main.go:141] libmachine: (calico-404553) Calling .DriverName
	I0729 21:06:14.378228  790394 start.go:159] libmachine.API.Create for "calico-404553" (driver="kvm2")
	I0729 21:06:14.378262  790394 client.go:168] LocalClient.Create starting
	I0729 21:06:14.378300  790394 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem
	I0729 21:06:14.378352  790394 main.go:141] libmachine: Decoding PEM data...
	I0729 21:06:14.378375  790394 main.go:141] libmachine: Parsing certificate...
	I0729 21:06:14.378454  790394 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem
	I0729 21:06:14.378481  790394 main.go:141] libmachine: Decoding PEM data...
	I0729 21:06:14.378500  790394 main.go:141] libmachine: Parsing certificate...
	I0729 21:06:14.378527  790394 main.go:141] libmachine: Running pre-create checks...
	I0729 21:06:14.378546  790394 main.go:141] libmachine: (calico-404553) Calling .PreCreateCheck
	I0729 21:06:14.378924  790394 main.go:141] libmachine: (calico-404553) Calling .GetConfigRaw
	I0729 21:06:14.379379  790394 main.go:141] libmachine: Creating machine...
	I0729 21:06:14.379398  790394 main.go:141] libmachine: (calico-404553) Calling .Create
	I0729 21:06:14.379520  790394 main.go:141] libmachine: (calico-404553) Creating KVM machine...
	I0729 21:06:14.381265  790394 main.go:141] libmachine: (calico-404553) DBG | found existing default KVM network
	I0729 21:06:14.383016  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:14.382862  790417 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f970}
	I0729 21:06:14.383052  790394 main.go:141] libmachine: (calico-404553) DBG | created network xml: 
	I0729 21:06:14.383073  790394 main.go:141] libmachine: (calico-404553) DBG | <network>
	I0729 21:06:14.383086  790394 main.go:141] libmachine: (calico-404553) DBG |   <name>mk-calico-404553</name>
	I0729 21:06:14.383095  790394 main.go:141] libmachine: (calico-404553) DBG |   <dns enable='no'/>
	I0729 21:06:14.383103  790394 main.go:141] libmachine: (calico-404553) DBG |   
	I0729 21:06:14.383114  790394 main.go:141] libmachine: (calico-404553) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0729 21:06:14.383125  790394 main.go:141] libmachine: (calico-404553) DBG |     <dhcp>
	I0729 21:06:14.383133  790394 main.go:141] libmachine: (calico-404553) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0729 21:06:14.383144  790394 main.go:141] libmachine: (calico-404553) DBG |     </dhcp>
	I0729 21:06:14.383161  790394 main.go:141] libmachine: (calico-404553) DBG |   </ip>
	I0729 21:06:14.383170  790394 main.go:141] libmachine: (calico-404553) DBG |   
	I0729 21:06:14.383181  790394 main.go:141] libmachine: (calico-404553) DBG | </network>
	I0729 21:06:14.383220  790394 main.go:141] libmachine: (calico-404553) DBG | 
	I0729 21:06:14.388750  790394 main.go:141] libmachine: (calico-404553) DBG | trying to create private KVM network mk-calico-404553 192.168.39.0/24...
	I0729 21:06:14.471049  790394 main.go:141] libmachine: (calico-404553) DBG | private KVM network mk-calico-404553 192.168.39.0/24 created
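The DBG lines above show the driver rendering a private libvirt network definition and creating it as mk-calico-404553 on the free 192.168.39.0/24 subnet. A sketch of producing an equivalent definition with text/template and handing it to virsh (a simplification of what docker-machine-driver-kvm2 does through the libvirt API; assumes virsh and access to qemu:///system):

// Sketch: render a libvirt network XML like the one logged above and define/start it.
package main

import (
	"os"
	"os/exec"
	"text/template"
)

const netXML = `<network>
  <name>mk-{{.Name}}</name>
  <dns enable='no'/>
  <ip address='{{.Gateway}}' netmask='255.255.255.0'>
    <dhcp>
      <range start='{{.First}}' end='{{.Last}}'/>
    </dhcp>
  </ip>
</network>
`

func main() {
	f, err := os.CreateTemp("", "mk-net-*.xml")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	params := struct{ Name, Gateway, First, Last string }{
		"calico-404553", "192.168.39.1", "192.168.39.2", "192.168.39.253",
	}
	if err := template.Must(template.New("net").Parse(netXML)).Execute(f, params); err != nil {
		panic(err)
	}
	f.Close()
	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-" + params.Name},
	} {
		cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
}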
	I0729 21:06:14.471093  790394 main.go:141] libmachine: (calico-404553) Setting up store path in /home/jenkins/minikube-integration/19344-733808/.minikube/machines/calico-404553 ...
	I0729 21:06:14.471108  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:14.471004  790417 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 21:06:14.471148  790394 main.go:141] libmachine: (calico-404553) Building disk image from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 21:06:14.471173  790394 main.go:141] libmachine: (calico-404553) Downloading /home/jenkins/minikube-integration/19344-733808/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso...
	I0729 21:06:14.736922  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:14.736788  790417 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/calico-404553/id_rsa...
	I0729 21:06:15.071501  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:15.071354  790417 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/calico-404553/calico-404553.rawdisk...
	I0729 21:06:15.071535  790394 main.go:141] libmachine: (calico-404553) DBG | Writing magic tar header
	I0729 21:06:15.071546  790394 main.go:141] libmachine: (calico-404553) DBG | Writing SSH key tar header
	I0729 21:06:15.071555  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:15.071471  790417 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/calico-404553 ...
	I0729 21:06:15.071579  790394 main.go:141] libmachine: (calico-404553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/calico-404553
	I0729 21:06:15.071616  790394 main.go:141] libmachine: (calico-404553) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines/calico-404553 (perms=drwx------)
	I0729 21:06:15.071686  790394 main.go:141] libmachine: (calico-404553) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube/machines (perms=drwxr-xr-x)
	I0729 21:06:15.071709  790394 main.go:141] libmachine: (calico-404553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube/machines
	I0729 21:06:15.071716  790394 main.go:141] libmachine: (calico-404553) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808/.minikube (perms=drwxr-xr-x)
	I0729 21:06:15.071727  790394 main.go:141] libmachine: (calico-404553) Setting executable bit set on /home/jenkins/minikube-integration/19344-733808 (perms=drwxrwxr-x)
	I0729 21:06:15.071733  790394 main.go:141] libmachine: (calico-404553) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0729 21:06:15.071741  790394 main.go:141] libmachine: (calico-404553) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0729 21:06:15.071747  790394 main.go:141] libmachine: (calico-404553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 21:06:15.071753  790394 main.go:141] libmachine: (calico-404553) Creating domain...
	I0729 21:06:15.071765  790394 main.go:141] libmachine: (calico-404553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19344-733808
	I0729 21:06:15.071774  790394 main.go:141] libmachine: (calico-404553) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0729 21:06:15.071787  790394 main.go:141] libmachine: (calico-404553) DBG | Checking permissions on dir: /home/jenkins
	I0729 21:06:15.071797  790394 main.go:141] libmachine: (calico-404553) DBG | Checking permissions on dir: /home
	I0729 21:06:15.071806  790394 main.go:141] libmachine: (calico-404553) DBG | Skipping /home - not owner
	I0729 21:06:15.073070  790394 main.go:141] libmachine: (calico-404553) define libvirt domain using xml: 
	I0729 21:06:15.073093  790394 main.go:141] libmachine: (calico-404553) <domain type='kvm'>
	I0729 21:06:15.073103  790394 main.go:141] libmachine: (calico-404553)   <name>calico-404553</name>
	I0729 21:06:15.073111  790394 main.go:141] libmachine: (calico-404553)   <memory unit='MiB'>3072</memory>
	I0729 21:06:15.073136  790394 main.go:141] libmachine: (calico-404553)   <vcpu>2</vcpu>
	I0729 21:06:15.073146  790394 main.go:141] libmachine: (calico-404553)   <features>
	I0729 21:06:15.073155  790394 main.go:141] libmachine: (calico-404553)     <acpi/>
	I0729 21:06:15.073169  790394 main.go:141] libmachine: (calico-404553)     <apic/>
	I0729 21:06:15.073185  790394 main.go:141] libmachine: (calico-404553)     <pae/>
	I0729 21:06:15.073195  790394 main.go:141] libmachine: (calico-404553)     
	I0729 21:06:15.073207  790394 main.go:141] libmachine: (calico-404553)   </features>
	I0729 21:06:15.073220  790394 main.go:141] libmachine: (calico-404553)   <cpu mode='host-passthrough'>
	I0729 21:06:15.073231  790394 main.go:141] libmachine: (calico-404553)   
	I0729 21:06:15.073245  790394 main.go:141] libmachine: (calico-404553)   </cpu>
	I0729 21:06:15.073301  790394 main.go:141] libmachine: (calico-404553)   <os>
	I0729 21:06:15.073321  790394 main.go:141] libmachine: (calico-404553)     <type>hvm</type>
	I0729 21:06:15.073328  790394 main.go:141] libmachine: (calico-404553)     <boot dev='cdrom'/>
	I0729 21:06:15.073333  790394 main.go:141] libmachine: (calico-404553)     <boot dev='hd'/>
	I0729 21:06:15.073341  790394 main.go:141] libmachine: (calico-404553)     <bootmenu enable='no'/>
	I0729 21:06:15.073348  790394 main.go:141] libmachine: (calico-404553)   </os>
	I0729 21:06:15.073354  790394 main.go:141] libmachine: (calico-404553)   <devices>
	I0729 21:06:15.073361  790394 main.go:141] libmachine: (calico-404553)     <disk type='file' device='cdrom'>
	I0729 21:06:15.073372  790394 main.go:141] libmachine: (calico-404553)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/calico-404553/boot2docker.iso'/>
	I0729 21:06:15.073380  790394 main.go:141] libmachine: (calico-404553)       <target dev='hdc' bus='scsi'/>
	I0729 21:06:15.073385  790394 main.go:141] libmachine: (calico-404553)       <readonly/>
	I0729 21:06:15.073394  790394 main.go:141] libmachine: (calico-404553)     </disk>
	I0729 21:06:15.073428  790394 main.go:141] libmachine: (calico-404553)     <disk type='file' device='disk'>
	I0729 21:06:15.073457  790394 main.go:141] libmachine: (calico-404553)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0729 21:06:15.073487  790394 main.go:141] libmachine: (calico-404553)       <source file='/home/jenkins/minikube-integration/19344-733808/.minikube/machines/calico-404553/calico-404553.rawdisk'/>
	I0729 21:06:15.073502  790394 main.go:141] libmachine: (calico-404553)       <target dev='hda' bus='virtio'/>
	I0729 21:06:15.073515  790394 main.go:141] libmachine: (calico-404553)     </disk>
	I0729 21:06:15.073526  790394 main.go:141] libmachine: (calico-404553)     <interface type='network'>
	I0729 21:06:15.073540  790394 main.go:141] libmachine: (calico-404553)       <source network='mk-calico-404553'/>
	I0729 21:06:15.073551  790394 main.go:141] libmachine: (calico-404553)       <model type='virtio'/>
	I0729 21:06:15.073562  790394 main.go:141] libmachine: (calico-404553)     </interface>
	I0729 21:06:15.073577  790394 main.go:141] libmachine: (calico-404553)     <interface type='network'>
	I0729 21:06:15.073590  790394 main.go:141] libmachine: (calico-404553)       <source network='default'/>
	I0729 21:06:15.073606  790394 main.go:141] libmachine: (calico-404553)       <model type='virtio'/>
	I0729 21:06:15.073617  790394 main.go:141] libmachine: (calico-404553)     </interface>
	I0729 21:06:15.073630  790394 main.go:141] libmachine: (calico-404553)     <serial type='pty'>
	I0729 21:06:15.073641  790394 main.go:141] libmachine: (calico-404553)       <target port='0'/>
	I0729 21:06:15.073648  790394 main.go:141] libmachine: (calico-404553)     </serial>
	I0729 21:06:15.073659  790394 main.go:141] libmachine: (calico-404553)     <console type='pty'>
	I0729 21:06:15.073668  790394 main.go:141] libmachine: (calico-404553)       <target type='serial' port='0'/>
	I0729 21:06:15.073685  790394 main.go:141] libmachine: (calico-404553)     </console>
	I0729 21:06:15.073702  790394 main.go:141] libmachine: (calico-404553)     <rng model='virtio'>
	I0729 21:06:15.073717  790394 main.go:141] libmachine: (calico-404553)       <backend model='random'>/dev/random</backend>
	I0729 21:06:15.073727  790394 main.go:141] libmachine: (calico-404553)     </rng>
	I0729 21:06:15.073734  790394 main.go:141] libmachine: (calico-404553)     
	I0729 21:06:15.073743  790394 main.go:141] libmachine: (calico-404553)     
	I0729 21:06:15.073773  790394 main.go:141] libmachine: (calico-404553)   </devices>
	I0729 21:06:15.073796  790394 main.go:141] libmachine: (calico-404553) </domain>
	I0729 21:06:15.073811  790394 main.go:141] libmachine: (calico-404553) 
	I0729 21:06:15.077988  790394 main.go:141] libmachine: (calico-404553) DBG | domain calico-404553 has defined MAC address 52:54:00:32:56:38 in network default
	I0729 21:06:15.078631  790394 main.go:141] libmachine: (calico-404553) Ensuring networks are active...
	I0729 21:06:15.078660  790394 main.go:141] libmachine: (calico-404553) DBG | domain calico-404553 has defined MAC address 52:54:00:8d:f6:98 in network mk-calico-404553
	I0729 21:06:15.079411  790394 main.go:141] libmachine: (calico-404553) Ensuring network default is active
	I0729 21:06:15.079683  790394 main.go:141] libmachine: (calico-404553) Ensuring network mk-calico-404553 is active
	I0729 21:06:15.080330  790394 main.go:141] libmachine: (calico-404553) Getting domain xml...
	I0729 21:06:15.081093  790394 main.go:141] libmachine: (calico-404553) Creating domain...
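Once the domain XML above is assembled, the driver defines the domain, makes sure both the default and the mk-calico-404553 networks are active, and boots the VM. Driven through virsh instead of the libvirt API, the same sequence is roughly (sketch; assumes the XML was saved to calico-404553.xml):

// Sketch: define and boot the domain from the XML logged above using virsh.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func virsh(args ...string) error {
	cmd := exec.Command("virsh", append([]string{"-c", "qemu:///system"}, args...)...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	steps := [][]string{
		{"define", "calico-404553.xml"},   // "define libvirt domain using xml"
		{"net-start", "default"},          // "Ensuring network default is active" (errors harmlessly if already up)
		{"net-start", "mk-calico-404553"}, // "Ensuring network mk-calico-404553 is active"
		{"start", "calico-404553"},        // boot the VM ("Creating domain...")
	}
	for _, s := range steps {
		if err := virsh(s...); err != nil {
			fmt.Println("step", s, "returned:", err)
		}
	}
}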
	I0729 21:06:16.320798  790394 main.go:141] libmachine: (calico-404553) Waiting to get IP...
	I0729 21:06:16.321648  790394 main.go:141] libmachine: (calico-404553) DBG | domain calico-404553 has defined MAC address 52:54:00:8d:f6:98 in network mk-calico-404553
	I0729 21:06:16.322170  790394 main.go:141] libmachine: (calico-404553) DBG | unable to find current IP address of domain calico-404553 in network mk-calico-404553
	I0729 21:06:16.322242  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:16.322158  790417 retry.go:31] will retry after 189.574543ms: waiting for machine to come up
	I0729 21:06:16.513639  790394 main.go:141] libmachine: (calico-404553) DBG | domain calico-404553 has defined MAC address 52:54:00:8d:f6:98 in network mk-calico-404553
	I0729 21:06:16.514167  790394 main.go:141] libmachine: (calico-404553) DBG | unable to find current IP address of domain calico-404553 in network mk-calico-404553
	I0729 21:06:16.514198  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:16.514113  790417 retry.go:31] will retry after 335.66938ms: waiting for machine to come up
	I0729 21:06:16.851733  790394 main.go:141] libmachine: (calico-404553) DBG | domain calico-404553 has defined MAC address 52:54:00:8d:f6:98 in network mk-calico-404553
	I0729 21:06:16.852376  790394 main.go:141] libmachine: (calico-404553) DBG | unable to find current IP address of domain calico-404553 in network mk-calico-404553
	I0729 21:06:16.852427  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:16.852328  790417 retry.go:31] will retry after 464.440495ms: waiting for machine to come up
	I0729 21:06:17.318140  790394 main.go:141] libmachine: (calico-404553) DBG | domain calico-404553 has defined MAC address 52:54:00:8d:f6:98 in network mk-calico-404553
	I0729 21:06:17.318609  790394 main.go:141] libmachine: (calico-404553) DBG | unable to find current IP address of domain calico-404553 in network mk-calico-404553
	I0729 21:06:17.318647  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:17.318565  790417 retry.go:31] will retry after 387.441792ms: waiting for machine to come up
	I0729 21:06:17.707998  790394 main.go:141] libmachine: (calico-404553) DBG | domain calico-404553 has defined MAC address 52:54:00:8d:f6:98 in network mk-calico-404553
	I0729 21:06:17.708600  790394 main.go:141] libmachine: (calico-404553) DBG | unable to find current IP address of domain calico-404553 in network mk-calico-404553
	I0729 21:06:17.708626  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:17.708534  790417 retry.go:31] will retry after 492.514024ms: waiting for machine to come up
	I0729 21:06:18.202415  790394 main.go:141] libmachine: (calico-404553) DBG | domain calico-404553 has defined MAC address 52:54:00:8d:f6:98 in network mk-calico-404553
	I0729 21:06:18.203034  790394 main.go:141] libmachine: (calico-404553) DBG | unable to find current IP address of domain calico-404553 in network mk-calico-404553
	I0729 21:06:18.203067  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:18.202977  790417 retry.go:31] will retry after 639.044739ms: waiting for machine to come up
	I0729 21:06:18.843930  790394 main.go:141] libmachine: (calico-404553) DBG | domain calico-404553 has defined MAC address 52:54:00:8d:f6:98 in network mk-calico-404553
	I0729 21:06:18.844543  790394 main.go:141] libmachine: (calico-404553) DBG | unable to find current IP address of domain calico-404553 in network mk-calico-404553
	I0729 21:06:18.844584  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:18.844488  790417 retry.go:31] will retry after 1.006202836s: waiting for machine to come up
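The retry.go lines above are the driver polling for a DHCP lease on mk-calico-404553 with a growing delay until the new VM reports an address. A standalone sketch of the same wait, shelling out to `virsh domifaddr` (one of several ways to read the lease; the kvm2 driver does this via the libvirt API, and its backoff schedule differs from the simple one below):

// Sketch: wait for a freshly created libvirt domain to obtain an IPv4 address.
package main

import (
	"fmt"
	"os/exec"
	"regexp"
	"time"
)

var ipv4 = regexp.MustCompile(`\b(?:\d{1,3}\.){3}\d{1,3}\b`)

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		out, _ := exec.Command("virsh", "-c", "qemu:///system", "domifaddr", domain).CombinedOutput()
		if ip := ipv4.FindString(string(out)); ip != "" {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // rough backoff, not minikube's exact schedule
	}
	return "", fmt.Errorf("no IP for domain %q within %v", domain, timeout)
}

func main() {
	ip, err := waitForIP("calico-404553", 4*time.Minute)
	fmt.Println(ip, err)
}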
	I0729 21:06:17.734348  788420 node_ready.go:53] node "kindnet-404553" has status "Ready":"False"
	I0729 21:06:19.734504  788420 node_ready.go:53] node "kindnet-404553" has status "Ready":"False"
	I0729 21:06:20.068424  787645 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 788ca70c3b478c793672ae6eac10ee62000aa70a1843351c83a0a4dfc4ea173e 66b3c9c7d3f809ccb859b1dec7db956bb5e88e7469c4b27e3a4bc208987f09d1 2b4bc8a75b5fbde2897e52cf10e7fb2aa14ff0165ba56895b1806224e60f297c 2c17e65ab0a065151fd4a1d767310dbbe9a828ddeddef099f9e68a9ecfed333e 3e01cc6db7196ba1185bcbd1430e9ca9df83a6ebbf7c81d3bb6486570e4567f8 a8ecbf86eda46d888909ebbbc1bb07a1f494dcd2d1e5e77762188bb48470fe61 23a57386ce096704e1d71fece2722fa32c0c52e0a6fa7a554d5c1f4935209b65 96ff3f6cc7d8969ef2d8d95c56ab52175bd0994743acc269bbc415578923e111 40192a292d169bce6786cabdbcb54e0a6335a254edbcd8d5544e311ec1783d90 82db8371a49e22502ad66d11237b303e810f279078735a974d36c61c2bbca791 0d6cb694e2c4becb7b39c714268607c4dab680aa88ed540d1d1afdca46f1fd77: (10.461169915s)
	I0729 21:06:20.068524  787645 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0729 21:06:20.113492  787645 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 21:06:20.123882  787645 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Jul 29 21:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5654 Jul 29 21:04 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5759 Jul 29 21:04 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Jul 29 21:04 /etc/kubernetes/scheduler.conf
	
	I0729 21:06:20.123955  787645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 21:06:20.133130  787645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 21:06:20.145109  787645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 21:06:20.156236  787645 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 21:06:20.156292  787645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 21:06:20.168108  787645 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 21:06:20.179362  787645 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0729 21:06:20.179422  787645 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
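In the interleaved kubernetes-upgrade-171355 restart (the 787645 log lines), the flow checks each existing file under /etc/kubernetes for the expected https://control-plane.minikube.internal:8443 endpoint and removes the ones where the grep comes back empty, so kubeadm will regenerate them in the next phases. A local sketch of that keep-or-remove decision:

// Sketch: drop kubeconfig files that no longer point at the expected
// control-plane endpoint, mirroring the grep / rm -f sequence above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			continue // missing file: kubeadm will create it
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
			_ = os.Remove(f)
		}
	}
}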
	I0729 21:06:20.189192  787645 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 21:06:20.198334  787645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 21:06:20.257029  787645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 21:06:21.268336  787645 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.011261763s)
	I0729 21:06:21.268371  787645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0729 21:06:21.507289  787645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 21:06:21.571214  787645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
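After clearing the stale configs, the restart runs kubeadm one init phase at a time (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml rather than a full `kubeadm init`. The same sequence as a plain loop (sketch; the binary path and config path are the ones shown in the log):

// Sketch: replay the per-phase kubeadm init sequence from the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const kubeadm = "/var/lib/minikube/binaries/v1.31.0-beta.0/kubeadm"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "phase failed:", p, err)
			os.Exit(1)
		}
	}
}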
	I0729 21:06:21.633192  787645 api_server.go:52] waiting for apiserver process to appear ...
	I0729 21:06:21.633296  787645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 21:06:22.133739  787645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 21:06:19.852363  790394 main.go:141] libmachine: (calico-404553) DBG | domain calico-404553 has defined MAC address 52:54:00:8d:f6:98 in network mk-calico-404553
	I0729 21:06:19.852825  790394 main.go:141] libmachine: (calico-404553) DBG | unable to find current IP address of domain calico-404553 in network mk-calico-404553
	I0729 21:06:19.852869  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:19.852773  790417 retry.go:31] will retry after 1.034826019s: waiting for machine to come up
	I0729 21:06:20.889112  790394 main.go:141] libmachine: (calico-404553) DBG | domain calico-404553 has defined MAC address 52:54:00:8d:f6:98 in network mk-calico-404553
	I0729 21:06:20.889727  790394 main.go:141] libmachine: (calico-404553) DBG | unable to find current IP address of domain calico-404553 in network mk-calico-404553
	I0729 21:06:20.889757  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:20.889682  790417 retry.go:31] will retry after 1.228583091s: waiting for machine to come up
	I0729 21:06:22.119498  790394 main.go:141] libmachine: (calico-404553) DBG | domain calico-404553 has defined MAC address 52:54:00:8d:f6:98 in network mk-calico-404553
	I0729 21:06:22.120147  790394 main.go:141] libmachine: (calico-404553) DBG | unable to find current IP address of domain calico-404553 in network mk-calico-404553
	I0729 21:06:22.120199  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:22.120093  790417 retry.go:31] will retry after 2.143683377s: waiting for machine to come up
	I0729 21:06:24.265352  790394 main.go:141] libmachine: (calico-404553) DBG | domain calico-404553 has defined MAC address 52:54:00:8d:f6:98 in network mk-calico-404553
	I0729 21:06:24.265871  790394 main.go:141] libmachine: (calico-404553) DBG | unable to find current IP address of domain calico-404553 in network mk-calico-404553
	I0729 21:06:24.265906  790394 main.go:141] libmachine: (calico-404553) DBG | I0729 21:06:24.265825  790417 retry.go:31] will retry after 2.253196758s: waiting for machine to come up
	I0729 21:06:21.735064  788420 node_ready.go:53] node "kindnet-404553" has status "Ready":"False"
	I0729 21:06:24.236221  788420 node_ready.go:53] node "kindnet-404553" has status "Ready":"False"
	I0729 21:06:24.736769  788420 node_ready.go:49] node "kindnet-404553" has status "Ready":"True"
	I0729 21:06:24.736857  788420 node_ready.go:38] duration metric: took 16.006635993s for node "kindnet-404553" to be "Ready" ...
	I0729 21:06:24.736885  788420 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 21:06:24.754947  788420 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-q2v2r" in "kube-system" namespace to be "Ready" ...
	I0729 21:06:25.764082  788420 pod_ready.go:92] pod "coredns-7db6d8ff4d-q2v2r" in "kube-system" namespace has status "Ready":"True"
	I0729 21:06:25.764119  788420 pod_ready.go:81] duration metric: took 1.009128326s for pod "coredns-7db6d8ff4d-q2v2r" in "kube-system" namespace to be "Ready" ...
	I0729 21:06:25.764138  788420 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-404553" in "kube-system" namespace to be "Ready" ...
	I0729 21:06:25.769251  788420 pod_ready.go:92] pod "etcd-kindnet-404553" in "kube-system" namespace has status "Ready":"True"
	I0729 21:06:25.769290  788420 pod_ready.go:81] duration metric: took 5.142449ms for pod "etcd-kindnet-404553" in "kube-system" namespace to be "Ready" ...
	I0729 21:06:25.769308  788420 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-404553" in "kube-system" namespace to be "Ready" ...
	I0729 21:06:25.775035  788420 pod_ready.go:92] pod "kube-apiserver-kindnet-404553" in "kube-system" namespace has status "Ready":"True"
	I0729 21:06:25.775061  788420 pod_ready.go:81] duration metric: took 5.742668ms for pod "kube-apiserver-kindnet-404553" in "kube-system" namespace to be "Ready" ...
	I0729 21:06:25.775072  788420 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-404553" in "kube-system" namespace to be "Ready" ...
	I0729 21:06:25.780081  788420 pod_ready.go:92] pod "kube-controller-manager-kindnet-404553" in "kube-system" namespace has status "Ready":"True"
	I0729 21:06:25.780103  788420 pod_ready.go:81] duration metric: took 5.022273ms for pod "kube-controller-manager-kindnet-404553" in "kube-system" namespace to be "Ready" ...
	I0729 21:06:25.780116  788420 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-lg24g" in "kube-system" namespace to be "Ready" ...
	I0729 21:06:25.935951  788420 pod_ready.go:92] pod "kube-proxy-lg24g" in "kube-system" namespace has status "Ready":"True"
	I0729 21:06:25.935985  788420 pod_ready.go:81] duration metric: took 155.85969ms for pod "kube-proxy-lg24g" in "kube-system" namespace to be "Ready" ...
	I0729 21:06:25.936000  788420 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-404553" in "kube-system" namespace to be "Ready" ...
	I0729 21:06:26.334627  788420 pod_ready.go:92] pod "kube-scheduler-kindnet-404553" in "kube-system" namespace has status "Ready":"True"
	I0729 21:06:26.334658  788420 pod_ready.go:81] duration metric: took 398.648144ms for pod "kube-scheduler-kindnet-404553" in "kube-system" namespace to be "Ready" ...
	I0729 21:06:26.334672  788420 pod_ready.go:38] duration metric: took 1.597734307s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
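The kindnet-404553 run (the 788420 log lines) walks the system-critical pods one by one and waits for each to report the Ready condition. Checking the same condition from the outside with kubectl's jsonpath output (sketch; the context and pod name are taken from the log):

// Sketch: poll a pod's Ready condition via kubectl, roughly what the
// pod_ready waits above observe through the API.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(kubeContext, ns, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext, "-n", ns, "get", "pod", pod,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for i := 0; i < 60; i++ {
		ok, err := podReady("kindnet-404553", "kube-system", "coredns-7db6d8ff4d-q2v2r")
		fmt.Println("ready:", ok, "err:", err)
		if ok {
			return
		}
		time.Sleep(2 * time.Second)
	}
}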
	I0729 21:06:26.334690  788420 api_server.go:52] waiting for apiserver process to appear ...
	I0729 21:06:26.334755  788420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 21:06:26.349226  788420 api_server.go:72] duration metric: took 18.372512599s to wait for apiserver process to appear ...
	I0729 21:06:26.349256  788420 api_server.go:88] waiting for apiserver healthz status ...
	I0729 21:06:26.349303  788420 api_server.go:253] Checking apiserver healthz at https://192.168.61.198:8443/healthz ...
	I0729 21:06:26.355377  788420 api_server.go:279] https://192.168.61.198:8443/healthz returned 200:
	ok
	I0729 21:06:26.356429  788420 api_server.go:141] control plane version: v1.30.3
	I0729 21:06:26.356455  788420 api_server.go:131] duration metric: took 7.193317ms to wait for apiserver health ...
	I0729 21:06:26.356463  788420 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 21:06:26.538708  788420 system_pods.go:59] 8 kube-system pods found
	I0729 21:06:26.538747  788420 system_pods.go:61] "coredns-7db6d8ff4d-q2v2r" [fd11759a-d34b-4fdd-b668-0e6beb6d89b9] Running
	I0729 21:06:26.538754  788420 system_pods.go:61] "etcd-kindnet-404553" [9e9200db-c8ca-4ee8-92c6-c487da906cf3] Running
	I0729 21:06:26.538760  788420 system_pods.go:61] "kindnet-dgdk7" [bd518258-2a62-4690-8773-5ff15e0ba821] Running
	I0729 21:06:26.538767  788420 system_pods.go:61] "kube-apiserver-kindnet-404553" [9e171f8d-e195-4bd1-9f7f-eaa0d995be85] Running
	I0729 21:06:26.538774  788420 system_pods.go:61] "kube-controller-manager-kindnet-404553" [1f134a8c-bb08-4c8f-a6ef-ec3ca7b02bf6] Running
	I0729 21:06:26.538780  788420 system_pods.go:61] "kube-proxy-lg24g" [f57a6772-9268-4f6e-a684-e402c4bc71fa] Running
	I0729 21:06:26.538785  788420 system_pods.go:61] "kube-scheduler-kindnet-404553" [800551e0-0d77-4dcb-9d9d-38f7a2724c22] Running
	I0729 21:06:26.538789  788420 system_pods.go:61] "storage-provisioner" [37b9d77a-c8df-4ec4-9109-e7732e48dd64] Running
	I0729 21:06:26.538796  788420 system_pods.go:74] duration metric: took 182.327128ms to wait for pod list to return data ...
	I0729 21:06:26.538812  788420 default_sa.go:34] waiting for default service account to be created ...
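The last wait in this block is for the default ServiceAccount in the default namespace, which the controller-manager creates shortly after the control plane becomes healthy. An equivalent external check (sketch, same assumptions as above):

// Sketch: wait until the "default" ServiceAccount exists, as the
// default_sa.go wait above does through the API.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 30; i++ {
		err := exec.Command("kubectl", "--context", "kindnet-404553",
			"-n", "default", "get", "serviceaccount", "default").Run()
		if err == nil {
			fmt.Println("default service account created")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for default service account")
}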
	I0729 21:06:22.634017  787645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 21:06:22.653180  787645 api_server.go:72] duration metric: took 1.019987581s to wait for apiserver process to appear ...
	I0729 21:06:22.653214  787645 api_server.go:88] waiting for apiserver healthz status ...
	I0729 21:06:22.653241  787645 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0729 21:06:24.940872  787645 api_server.go:279] https://192.168.50.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 21:06:24.940915  787645 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 21:06:24.940929  787645 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0729 21:06:24.967106  787645 api_server.go:279] https://192.168.50.242:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0729 21:06:24.967142  787645 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0729 21:06:25.153349  787645 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0729 21:06:25.159566  787645 api_server.go:279] https://192.168.50.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 21:06:25.159675  787645 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 21:06:25.653974  787645 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0729 21:06:25.662766  787645 api_server.go:279] https://192.168.50.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 21:06:25.662800  787645 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 21:06:26.153915  787645 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0729 21:06:26.162443  787645 api_server.go:279] https://192.168.50.242:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0729 21:06:26.162475  787645 api_server.go:103] status: https://192.168.50.242:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0729 21:06:26.654086  787645 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0729 21:06:26.659143  787645 api_server.go:279] https://192.168.50.242:8443/healthz returned 200:
	ok
	I0729 21:06:26.666630  787645 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 21:06:26.666662  787645 api_server.go:131] duration metric: took 4.013438499s to wait for apiserver health ...
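The preceding block is minikube polling the apiserver's /healthz endpoint roughly every 500ms until the rbac/bootstrap-roles and priority-class post-start hooks finish and the endpoint returns 200. A minimal sketch of that polling pattern in Go follows; the URL, interval, and the decision to skip TLS verification are assumptions made to keep it short (the real check trusts the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            // Skipping certificate verification is an assumption for brevity only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // the endpoint answered "ok"
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.242:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }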
	I0729 21:06:26.666674  787645 cni.go:84] Creating CNI manager for ""
	I0729 21:06:26.666682  787645 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 21:06:26.668467  787645 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0729 21:06:26.669886  787645 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0729 21:06:26.681721  787645 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
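The two commands above create /etc/cni/net.d and copy a 496-byte bridge conflist into it. The exact file minikube generated is not reproduced in the log; the sketch below writes a typical bridge-plus-portmap conflist of the kind the bridge plugin accepts, with the subnet and file contents being illustrative assumptions:

    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    // A typical bridge CNI configuration; the subnet and plugin mix are
    // illustrative, not the exact contents minikube wrote here.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        dir := "/etc/cni/net.d"
        if err := os.MkdirAll(dir, 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }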
	I0729 21:06:26.703636  787645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 21:06:26.715164  787645 system_pods.go:59] 8 kube-system pods found
	I0729 21:06:26.715192  787645 system_pods.go:61] "coredns-5cfdc65f69-gv2fq" [6b25524d-d6b0-4252-8155-8b06731b95e2] Running
	I0729 21:06:26.715196  787645 system_pods.go:61] "coredns-5cfdc65f69-jrvps" [ed098d39-429a-4f0c-a164-b7d157e7ace3] Running
	I0729 21:06:26.715203  787645 system_pods.go:61] "etcd-kubernetes-upgrade-171355" [1c57b510-92bc-4475-9fd0-c80e528fa865] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 21:06:26.715209  787645 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-171355" [03f1c771-d287-4489-ac1b-c7a9bd802127] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 21:06:26.715217  787645 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-171355" [1ed6e830-6df7-4cbe-addf-a930955ff45c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 21:06:26.715223  787645 system_pods.go:61] "kube-proxy-h8b9w" [57eb6810-3346-41c7-ac5f-b511e215af50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 21:06:26.715232  787645 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-171355" [dc2c427d-969c-4abd-9452-8afb27e4c4e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 21:06:26.715237  787645 system_pods.go:61] "storage-provisioner" [ed661b7f-a645-4fa1-a75f-559f6ccf63ab] Running
	I0729 21:06:26.715243  787645 system_pods.go:74] duration metric: took 11.586919ms to wait for pod list to return data ...
	I0729 21:06:26.715250  787645 node_conditions.go:102] verifying NodePressure condition ...
	I0729 21:06:26.720888  787645 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 21:06:26.720918  787645 node_conditions.go:123] node cpu capacity is 2
	I0729 21:06:26.720930  787645 node_conditions.go:105] duration metric: took 5.674622ms to run NodePressure ...
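The pod list and NodePressure checks above map onto plain client-go calls. A self-contained sketch, assuming a kubeconfig path rather than minikube's internal client plumbing:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // The path is an assumption; minikube points this at its own kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // "waiting for kube-system pods to appear ..."
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))

        // "verifying NodePressure condition ..." plus the capacity figures in the log.
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }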
	I0729 21:06:26.720950  787645 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0729 21:06:27.052627  787645 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 21:06:27.064460  787645 ops.go:34] apiserver oom_adj: -16
	I0729 21:06:27.064491  787645 kubeadm.go:597] duration metric: took 17.835980229s to restartPrimaryControlPlane
	I0729 21:06:27.064502  787645 kubeadm.go:394] duration metric: took 18.840170674s to StartCluster
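The oom_adj probe logged at 21:06:27.052 is just `cat /proc/$(pgrep kube-apiserver)/oom_adj` run over SSH; the reported -16 means the apiserver has been deprioritized for the OOM killer. Done locally, the same check reduces to a couple of standard-library calls (matching by process name here is a simplification of pgrep -xnf):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the apiserver PID; plain name matching is a simplification of
        // the full-command-line match used in the log.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            log.Fatalf("pgrep: %v", err)
        }
        pid := strings.Fields(string(out))[0]

        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(adj)))
    }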
	I0729 21:06:27.064525  787645 settings.go:142] acquiring lock: {Name:mk9a2eb797f60b19768f4bfa250a8d2214a5ca12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 21:06:27.064620  787645 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 21:06:27.065880  787645 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/kubeconfig: {Name:mk9e65e9af9b71b889324d8c5e2a1adfebbca588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 21:06:27.066183  787645 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.242 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 21:06:27.066329  787645 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 21:06:27.066400  787645 config.go:182] Loaded profile config "kubernetes-upgrade-171355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 21:06:27.066425  787645 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-171355"
	I0729 21:06:27.066403  787645 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-171355"
	I0729 21:06:27.066492  787645 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-171355"
	I0729 21:06:27.066496  787645 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-171355"
	W0729 21:06:27.066512  787645 addons.go:243] addon storage-provisioner should already be in state true
	I0729 21:06:27.066560  787645 host.go:66] Checking if "kubernetes-upgrade-171355" exists ...
	I0729 21:06:27.066884  787645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 21:06:27.066926  787645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 21:06:27.066969  787645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 21:06:27.067018  787645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 21:06:27.067882  787645 out.go:177] * Verifying Kubernetes components...
	I0729 21:06:27.069626  787645 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 21:06:27.082468  787645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46731
	I0729 21:06:27.082559  787645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46021
	I0729 21:06:27.082971  787645 main.go:141] libmachine: () Calling .GetVersion
	I0729 21:06:27.083018  787645 main.go:141] libmachine: () Calling .GetVersion
	I0729 21:06:27.083597  787645 main.go:141] libmachine: Using API Version  1
	I0729 21:06:27.083621  787645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 21:06:27.083733  787645 main.go:141] libmachine: Using API Version  1
	I0729 21:06:27.083758  787645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 21:06:27.084053  787645 main.go:141] libmachine: () Calling .GetMachineName
	I0729 21:06:27.084097  787645 main.go:141] libmachine: () Calling .GetMachineName
	I0729 21:06:27.084266  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetState
	I0729 21:06:27.084638  787645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 21:06:27.084673  787645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 21:06:27.087338  787645 kapi.go:59] client config for kubernetes-upgrade-171355: &rest.Config{Host:"https://192.168.50.242:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/client.crt", KeyFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/profiles/kubernetes-upgrade-171355/client.key", CAFile:"/home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(
nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d02d40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0729 21:06:27.087701  787645 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-171355"
	W0729 21:06:27.087719  787645 addons.go:243] addon default-storageclass should already be in state true
	I0729 21:06:27.087749  787645 host.go:66] Checking if "kubernetes-upgrade-171355" exists ...
	I0729 21:06:27.088144  787645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 21:06:27.088181  787645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 21:06:27.100651  787645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41607
	I0729 21:06:27.101183  787645 main.go:141] libmachine: () Calling .GetVersion
	I0729 21:06:27.101729  787645 main.go:141] libmachine: Using API Version  1
	I0729 21:06:27.101754  787645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 21:06:27.102313  787645 main.go:141] libmachine: () Calling .GetMachineName
	I0729 21:06:27.102526  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetState
	I0729 21:06:27.103151  787645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42737
	I0729 21:06:27.103587  787645 main.go:141] libmachine: () Calling .GetVersion
	I0729 21:06:27.104026  787645 main.go:141] libmachine: Using API Version  1
	I0729 21:06:27.104069  787645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 21:06:27.104508  787645 main.go:141] libmachine: () Calling .GetMachineName
	I0729 21:06:27.104590  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .DriverName
	I0729 21:06:27.105146  787645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 21:06:27.105191  787645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 21:06:27.107410  787645 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 21:06:26.734607  788420 default_sa.go:45] found service account: "default"
	I0729 21:06:26.734632  788420 default_sa.go:55] duration metric: took 195.809612ms for default service account to be created ...
	I0729 21:06:26.734642  788420 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 21:06:26.938527  788420 system_pods.go:86] 8 kube-system pods found
	I0729 21:06:26.938569  788420 system_pods.go:89] "coredns-7db6d8ff4d-q2v2r" [fd11759a-d34b-4fdd-b668-0e6beb6d89b9] Running
	I0729 21:06:26.938576  788420 system_pods.go:89] "etcd-kindnet-404553" [9e9200db-c8ca-4ee8-92c6-c487da906cf3] Running
	I0729 21:06:26.938580  788420 system_pods.go:89] "kindnet-dgdk7" [bd518258-2a62-4690-8773-5ff15e0ba821] Running
	I0729 21:06:26.938584  788420 system_pods.go:89] "kube-apiserver-kindnet-404553" [9e171f8d-e195-4bd1-9f7f-eaa0d995be85] Running
	I0729 21:06:26.938588  788420 system_pods.go:89] "kube-controller-manager-kindnet-404553" [1f134a8c-bb08-4c8f-a6ef-ec3ca7b02bf6] Running
	I0729 21:06:26.938592  788420 system_pods.go:89] "kube-proxy-lg24g" [f57a6772-9268-4f6e-a684-e402c4bc71fa] Running
	I0729 21:06:26.938596  788420 system_pods.go:89] "kube-scheduler-kindnet-404553" [800551e0-0d77-4dcb-9d9d-38f7a2724c22] Running
	I0729 21:06:26.938602  788420 system_pods.go:89] "storage-provisioner" [37b9d77a-c8df-4ec4-9109-e7732e48dd64] Running
	I0729 21:06:26.938611  788420 system_pods.go:126] duration metric: took 203.962589ms to wait for k8s-apps to be running ...
	I0729 21:06:26.938621  788420 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 21:06:26.938674  788420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 21:06:26.954905  788420 system_svc.go:56] duration metric: took 16.270627ms WaitForService to wait for kubelet
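The kubelet wait just above relies only on systemctl's exit status: `is-active --quiet` prints nothing and exits non-zero when the unit is not active. Run locally instead of over SSH (the only assumption here), the check is:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // systemctl exits 0 only when the unit is active; --quiet suppresses
        // output, so the exit status is the whole answer.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet is not running:", err)
            return
        }
        fmt.Println("kubelet is running")
    }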
	I0729 21:06:26.954943  788420 kubeadm.go:582] duration metric: took 18.97823405s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 21:06:26.954969  788420 node_conditions.go:102] verifying NodePressure condition ...
	I0729 21:06:27.135327  788420 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 21:06:27.135350  788420 node_conditions.go:123] node cpu capacity is 2
	I0729 21:06:27.135363  788420 node_conditions.go:105] duration metric: took 180.388683ms to run NodePressure ...
	I0729 21:06:27.135375  788420 start.go:241] waiting for startup goroutines ...
	I0729 21:06:27.135382  788420 start.go:246] waiting for cluster config update ...
	I0729 21:06:27.135392  788420 start.go:255] writing updated cluster config ...
	I0729 21:06:27.135628  788420 ssh_runner.go:195] Run: rm -f paused
	I0729 21:06:27.185232  788420 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 21:06:27.187391  788420 out.go:177] * Done! kubectl is now configured to use "kindnet-404553" cluster and "default" namespace by default
	I0729 21:06:27.108682  787645 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 21:06:27.108704  787645 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 21:06:27.108724  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHHostname
	I0729 21:06:27.111360  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 21:06:27.111814  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 21:06:27.111840  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 21:06:27.112009  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHPort
	I0729 21:06:27.112220  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 21:06:27.112376  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHUsername
	I0729 21:06:27.112551  787645 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355/id_rsa Username:docker}
	I0729 21:06:27.122251  787645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40115
	I0729 21:06:27.122788  787645 main.go:141] libmachine: () Calling .GetVersion
	I0729 21:06:27.123410  787645 main.go:141] libmachine: Using API Version  1
	I0729 21:06:27.123432  787645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 21:06:27.123992  787645 main.go:141] libmachine: () Calling .GetMachineName
	I0729 21:06:27.124378  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetState
	I0729 21:06:27.125847  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .DriverName
	I0729 21:06:27.126040  787645 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 21:06:27.126055  787645 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 21:06:27.126069  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHHostname
	I0729 21:06:27.128815  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 21:06:27.129229  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:aa:dd", ip: ""} in network mk-kubernetes-upgrade-171355: {Iface:virbr2 ExpiryTime:2024-07-29 21:59:07 +0000 UTC Type:0 Mac:52:54:00:53:aa:dd Iaid: IPaddr:192.168.50.242 Prefix:24 Hostname:kubernetes-upgrade-171355 Clientid:01:52:54:00:53:aa:dd}
	I0729 21:06:27.129250  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | domain kubernetes-upgrade-171355 has defined IP address 192.168.50.242 and MAC address 52:54:00:53:aa:dd in network mk-kubernetes-upgrade-171355
	I0729 21:06:27.129436  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHPort
	I0729 21:06:27.129582  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHKeyPath
	I0729 21:06:27.129741  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .GetSSHUsername
	I0729 21:06:27.129848  787645 sshutil.go:53] new ssh client: &{IP:192.168.50.242 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/kubernetes-upgrade-171355/id_rsa Username:docker}
	I0729 21:06:27.291118  787645 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 21:06:27.306278  787645 api_server.go:52] waiting for apiserver process to appear ...
	I0729 21:06:27.306364  787645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 21:06:27.322254  787645 api_server.go:72] duration metric: took 256.030564ms to wait for apiserver process to appear ...
	I0729 21:06:27.322297  787645 api_server.go:88] waiting for apiserver healthz status ...
	I0729 21:06:27.322331  787645 api_server.go:253] Checking apiserver healthz at https://192.168.50.242:8443/healthz ...
	I0729 21:06:27.328055  787645 api_server.go:279] https://192.168.50.242:8443/healthz returned 200:
	ok
	I0729 21:06:27.328935  787645 api_server.go:141] control plane version: v1.31.0-beta.0
	I0729 21:06:27.328957  787645 api_server.go:131] duration metric: took 6.644557ms to wait for apiserver health ...
	I0729 21:06:27.328964  787645 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 21:06:27.336805  787645 system_pods.go:59] 8 kube-system pods found
	I0729 21:06:27.336830  787645 system_pods.go:61] "coredns-5cfdc65f69-gv2fq" [6b25524d-d6b0-4252-8155-8b06731b95e2] Running
	I0729 21:06:27.336834  787645 system_pods.go:61] "coredns-5cfdc65f69-jrvps" [ed098d39-429a-4f0c-a164-b7d157e7ace3] Running
	I0729 21:06:27.336841  787645 system_pods.go:61] "etcd-kubernetes-upgrade-171355" [1c57b510-92bc-4475-9fd0-c80e528fa865] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0729 21:06:27.336847  787645 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-171355" [03f1c771-d287-4489-ac1b-c7a9bd802127] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0729 21:06:27.336856  787645 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-171355" [1ed6e830-6df7-4cbe-addf-a930955ff45c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0729 21:06:27.336863  787645 system_pods.go:61] "kube-proxy-h8b9w" [57eb6810-3346-41c7-ac5f-b511e215af50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0729 21:06:27.336881  787645 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-171355" [dc2c427d-969c-4abd-9452-8afb27e4c4e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0729 21:06:27.336885  787645 system_pods.go:61] "storage-provisioner" [ed661b7f-a645-4fa1-a75f-559f6ccf63ab] Running
	I0729 21:06:27.336891  787645 system_pods.go:74] duration metric: took 7.922059ms to wait for pod list to return data ...
	I0729 21:06:27.336899  787645 kubeadm.go:582] duration metric: took 270.685546ms to wait for: map[apiserver:true system_pods:true]
	I0729 21:06:27.336914  787645 node_conditions.go:102] verifying NodePressure condition ...
	I0729 21:06:27.339564  787645 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 21:06:27.339583  787645 node_conditions.go:123] node cpu capacity is 2
	I0729 21:06:27.339592  787645 node_conditions.go:105] duration metric: took 2.673945ms to run NodePressure ...
	I0729 21:06:27.339605  787645 start.go:241] waiting for startup goroutines ...
	I0729 21:06:27.486203  787645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 21:06:27.486879  787645 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 21:06:27.671360  787645 main.go:141] libmachine: Making call to close driver server
	I0729 21:06:27.671393  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .Close
	I0729 21:06:27.671723  787645 main.go:141] libmachine: Successfully made call to close driver server
	I0729 21:06:27.671744  787645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 21:06:27.671756  787645 main.go:141] libmachine: Making call to close driver server
	I0729 21:06:27.671764  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .Close
	I0729 21:06:27.672040  787645 main.go:141] libmachine: Successfully made call to close driver server
	I0729 21:06:27.672064  787645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 21:06:27.681746  787645 main.go:141] libmachine: Making call to close driver server
	I0729 21:06:27.681767  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .Close
	I0729 21:06:27.682088  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Closing plugin on server side
	I0729 21:06:27.682150  787645 main.go:141] libmachine: Successfully made call to close driver server
	I0729 21:06:27.682177  787645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 21:06:28.146381  787645 main.go:141] libmachine: Making call to close driver server
	I0729 21:06:28.146406  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .Close
	I0729 21:06:28.146725  787645 main.go:141] libmachine: Successfully made call to close driver server
	I0729 21:06:28.146744  787645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 21:06:28.146754  787645 main.go:141] libmachine: Making call to close driver server
	I0729 21:06:28.146762  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) Calling .Close
	I0729 21:06:28.146763  787645 main.go:141] libmachine: (kubernetes-upgrade-171355) DBG | Closing plugin on server side
	I0729 21:06:28.146993  787645 main.go:141] libmachine: Successfully made call to close driver server
	I0729 21:06:28.147007  787645 main.go:141] libmachine: Making call to close connection to plugin binary
	I0729 21:06:28.148857  787645 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0729 21:06:28.150164  787645 addons.go:510] duration metric: took 1.083851471s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0729 21:06:28.150206  787645 start.go:246] waiting for cluster config update ...
	I0729 21:06:28.150221  787645 start.go:255] writing updated cluster config ...
	I0729 21:06:28.150465  787645 ssh_runner.go:195] Run: rm -f paused
	I0729 21:06:28.198810  787645 start.go:600] kubectl: 1.30.3, cluster: 1.31.0-beta.0 (minor skew: 1)
	I0729 21:06:28.200695  787645 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-171355" cluster and "default" namespace by default
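The default-storageclass and storage-provisioner addons above are enabled by running the node's own kubectl against the manifests under /etc/kubernetes/addons, with KUBECONFIG pointed at /var/lib/minikube/kubeconfig. A stripped-down local sketch of those two apply calls (the SSH transport and minikube's retry logic are omitted):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    // applyManifest shells out to kubectl the way the log's ssh_runner does,
    // except locally instead of over SSH (an assumption for this sketch).
    func applyManifest(kubectl, kubeconfig, manifest string) error {
        cmd := exec.Command(kubectl, "apply", "-f", manifest)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.31.0-beta.0/kubectl"
        kubeconfig := "/var/lib/minikube/kubeconfig"
        for _, m := range []string{
            "/etc/kubernetes/addons/storageclass.yaml",
            "/etc/kubernetes/addons/storage-provisioner.yaml",
        } {
            if err := applyManifest(kubectl, kubeconfig, m); err != nil {
                log.Fatalf("apply %s: %v", m, err)
            }
        }
    }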
	
	
	==> CRI-O <==
	Jul 29 21:06:28 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:28.916089110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722287188916053388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1926de9-a88d-400b-8a20-e2d53c18b11c name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:06:28 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:28.916810909Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=825681fc-9c00-4980-8376-268e5c65f4ca name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:06:28 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:28.916887457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=825681fc-9c00-4980-8376-268e5c65f4ca name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:06:28 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:28.917521714Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:65cd831b460a369bfa98be8a1ac6cf5c327bb25a90bcf9e9fef9d730d49489af,PodSandboxId:92930d7dc2e683313b1209d1883000ac0db13bccfed4ff675a19656d38f35adc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722287182107810451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c36c7d421a423eaa5b3ec703392e2d37,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8300e4a49509e72642a5e3f250c27c459746389eefd29f17a134c39d465fb4,PodSandboxId:e14f8e5053fffa10566e305d4ea7fd995794a87e6d7c75e93956e8898343aa7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722287182129323397,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c42b066dd261d509e9c9201269618378,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.contai
ner.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e243ff1d611d3a7bc4c0b0818da5e9551a248e298ecedbf4fa3db66bf5ee3bc,PodSandboxId:567a225e66c5eb49151df09ab69de113f35370b1b850dcb1b86bfb1f1b224ee9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722287182115856182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e74de1adae252c6ba4b93d51aa3146,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9692fcebb577b651a4e161251fb4f8cf59a2a94ee25f314b9e38b9172ec188f5,PodSandboxId:123231a763545397975cde89f945c18159e7bfc6dcc5fdaeab07e2d2efcc2d7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722287169833567828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed661b7f-a645-4fa1-a75f-559f6ccf63ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b511569e53c23e343fcdef376fb166d45f8aaee3d4e832b9d46c1f2344363d76,PodSandboxId:037f40bff420b11a0266d39bffb04134a8d6f0f5c350ed958921aa4c5d435de3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287169730576194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gv2fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b25524d-d6b0-4252-8155-8b06731b95e2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c255beeb6c0a3bb9d0cfd3336438149d7ff5e7c166ca1f6099344cbb1515e8c,PodSandboxId:ec960a8a4a1960e3152e2486a9fd1656644ae292f329c8104d6a9ed2e56b8dce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722287169726078004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 4cb859a8044705795f72f0646ed35345,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33d229b62be637cf0103664a6d6b6f426304b53761dea90c4517ea7bc0d97d4b,PodSandboxId:4d8a9ad61a40eb95499820ff4a3c559203cdf2a92b7ee0d9cf33da8f86372659,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287169538408852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jrvps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed098d39-429a-4f0
c-a164-b7d157e7ace3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788ca70c3b478c793672ae6eac10ee62000aa70a1843351c83a0a4dfc4ea173e,PodSandboxId:92930d7dc2e683313b1209d1883000ac0db13bccfed4ff675a19656d38f35adc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:172228716872
1353995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c36c7d421a423eaa5b3ec703392e2d37,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b3c9c7d3f809ccb859b1dec7db956bb5e88e7469c4b27e3a4bc208987f09d1,PodSandboxId:567a225e66c5eb49151df09ab69de113f35370b1b850dcb1b86bfb1f1b224ee9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722287168674533109,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e74de1adae252c6ba4b93d51aa3146,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4bc8a75b5fbde2897e52cf10e7fb2aa14ff0165ba56895b1806224e60f297c,PodSandboxId:e14f8e5053fffa10566e305d4ea7fd995794a87e6d7c75e93956e8898343aa7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722287168545910257,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c42b066dd261d509e9c9201269618378,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c17e65ab0a065151fd4a1d767310dbbe9a828ddeddef099f9e68a9ecfed333e,PodSandboxId:074c2d76298c73edeb136cddeff61c73b5a62eca56f2ed3f4f4757957912dadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722287075366245484,Labels:map[string]str
ing{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gv2fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b25524d-d6b0-4252-8155-8b06731b95e2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e01cc6db7196ba1185bcbd1430e9ca9df83a6ebbf7c81d3bb6486570e4567f8,PodSandboxId:8aff9cdccca7ef1e88f8859f7fca5df9167c16933a2d8d60cf3f274735684bc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722287075221618229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jrvps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed098d39-429a-4f0c-a164-b7d157e7ace3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96ff3f6cc7d8969ef2d8d95c56ab52175bd0994743acc269bbc415578923e111,PodSandboxId:d823cddeedf9aba9140845e65d6566828c9a
f7030d51a494693a958107b6dcd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722287074296183121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cb859a8044705795f72f0646ed35345,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db8371a49e22502ad66d11237b303e810f279078735a974d36c61c2bbca791,PodSandboxId:c1b5bb93db70f653f2220c41902ca3efd3b01a5f76
a39bf7cffd0f94032c61a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722287055676897253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed661b7f-a645-4fa1-a75f-559f6ccf63ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d6cb694e2c4becb7b39c714268607c4dab680aa88ed540d1d1afdca46f1fd77,PodSandboxId:fc7cb7fbdf6de451447d02ff512f8aca3162ccaed37cfe3bcc1d4c8
31e8f667f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722287054701577470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8b9w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57eb6810-3346-41c7-ac5f-b511e215af50,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=825681fc-9c00-4980-8376-268e5c65f4ca name=/runtime.v1.RuntimeService/ListContainers
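The CRI-O entries in this section are the runtime's view of CRI calls such as /runtime.v1.RuntimeService/ListContainers and /runtime.v1.ImageService/ImageFsInfo, issued by the kubelet and by crictl (crictl ps -a asks the runtime for the same container list). A minimal client-side sketch against the CRI gRPC API, assuming the conventional CRI-O socket path:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // /var/run/crio/crio.sock is the conventional CRI-O socket; an assumption here.
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Same call the log records with an empty filter
        // ("No filters were applied, returning full container list").
        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range resp.Containers {
            fmt.Printf("%s  %s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
        }
    }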
	Jul 29 21:06:28 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:28.977839603Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=47e9d8ca-0aeb-431e-9847-5902b1f07b9f name=/runtime.v1.RuntimeService/Version
	Jul 29 21:06:28 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:28.977945566Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=47e9d8ca-0aeb-431e-9847-5902b1f07b9f name=/runtime.v1.RuntimeService/Version
	Jul 29 21:06:28 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:28.979666028Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=429acb0b-9528-48d8-8eb6-5298a2a4cad2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:06:28 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:28.980512227Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722287188980474826,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=429acb0b-9528-48d8-8eb6-5298a2a4cad2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:06:28 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:28.981353823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9486a33-a64a-4dc8-8e7a-b1cc21137e35 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:06:28 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:28.981445816Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9486a33-a64a-4dc8-8e7a-b1cc21137e35 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:06:28 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:28.981883429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:65cd831b460a369bfa98be8a1ac6cf5c327bb25a90bcf9e9fef9d730d49489af,PodSandboxId:92930d7dc2e683313b1209d1883000ac0db13bccfed4ff675a19656d38f35adc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722287182107810451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c36c7d421a423eaa5b3ec703392e2d37,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8300e4a49509e72642a5e3f250c27c459746389eefd29f17a134c39d465fb4,PodSandboxId:e14f8e5053fffa10566e305d4ea7fd995794a87e6d7c75e93956e8898343aa7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722287182129323397,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c42b066dd261d509e9c9201269618378,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.contai
ner.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e243ff1d611d3a7bc4c0b0818da5e9551a248e298ecedbf4fa3db66bf5ee3bc,PodSandboxId:567a225e66c5eb49151df09ab69de113f35370b1b850dcb1b86bfb1f1b224ee9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722287182115856182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e74de1adae252c6ba4b93d51aa3146,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9692fcebb577b651a4e161251fb4f8cf59a2a94ee25f314b9e38b9172ec188f5,PodSandboxId:123231a763545397975cde89f945c18159e7bfc6dcc5fdaeab07e2d2efcc2d7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722287169833567828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed661b7f-a645-4fa1-a75f-559f6ccf63ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b511569e53c23e343fcdef376fb166d45f8aaee3d4e832b9d46c1f2344363d76,PodSandboxId:037f40bff420b11a0266d39bffb04134a8d6f0f5c350ed958921aa4c5d435de3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287169730576194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gv2fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b25524d-d6b0-4252-8155-8b06731b95e2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c255beeb6c0a3bb9d0cfd3336438149d7ff5e7c166ca1f6099344cbb1515e8c,PodSandboxId:ec960a8a4a1960e3152e2486a9fd1656644ae292f329c8104d6a9ed2e56b8dce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722287169726078004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 4cb859a8044705795f72f0646ed35345,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33d229b62be637cf0103664a6d6b6f426304b53761dea90c4517ea7bc0d97d4b,PodSandboxId:4d8a9ad61a40eb95499820ff4a3c559203cdf2a92b7ee0d9cf33da8f86372659,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287169538408852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jrvps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed098d39-429a-4f0
c-a164-b7d157e7ace3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788ca70c3b478c793672ae6eac10ee62000aa70a1843351c83a0a4dfc4ea173e,PodSandboxId:92930d7dc2e683313b1209d1883000ac0db13bccfed4ff675a19656d38f35adc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:172228716872
1353995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c36c7d421a423eaa5b3ec703392e2d37,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b3c9c7d3f809ccb859b1dec7db956bb5e88e7469c4b27e3a4bc208987f09d1,PodSandboxId:567a225e66c5eb49151df09ab69de113f35370b1b850dcb1b86bfb1f1b224ee9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722287168674533109,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e74de1adae252c6ba4b93d51aa3146,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4bc8a75b5fbde2897e52cf10e7fb2aa14ff0165ba56895b1806224e60f297c,PodSandboxId:e14f8e5053fffa10566e305d4ea7fd995794a87e6d7c75e93956e8898343aa7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722287168545910257,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c42b066dd261d509e9c9201269618378,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c17e65ab0a065151fd4a1d767310dbbe9a828ddeddef099f9e68a9ecfed333e,PodSandboxId:074c2d76298c73edeb136cddeff61c73b5a62eca56f2ed3f4f4757957912dadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722287075366245484,Labels:map[string]str
ing{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gv2fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b25524d-d6b0-4252-8155-8b06731b95e2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e01cc6db7196ba1185bcbd1430e9ca9df83a6ebbf7c81d3bb6486570e4567f8,PodSandboxId:8aff9cdccca7ef1e88f8859f7fca5df9167c16933a2d8d60cf3f274735684bc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722287075221618229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jrvps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed098d39-429a-4f0c-a164-b7d157e7ace3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96ff3f6cc7d8969ef2d8d95c56ab52175bd0994743acc269bbc415578923e111,PodSandboxId:d823cddeedf9aba9140845e65d6566828c9a
f7030d51a494693a958107b6dcd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722287074296183121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cb859a8044705795f72f0646ed35345,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db8371a49e22502ad66d11237b303e810f279078735a974d36c61c2bbca791,PodSandboxId:c1b5bb93db70f653f2220c41902ca3efd3b01a5f76
a39bf7cffd0f94032c61a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722287055676897253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed661b7f-a645-4fa1-a75f-559f6ccf63ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d6cb694e2c4becb7b39c714268607c4dab680aa88ed540d1d1afdca46f1fd77,PodSandboxId:fc7cb7fbdf6de451447d02ff512f8aca3162ccaed37cfe3bcc1d4c8
31e8f667f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722287054701577470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8b9w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57eb6810-3346-41c7-ac5f-b511e215af50,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9486a33-a64a-4dc8-8e7a-b1cc21137e35 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:06:29 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:29.029785671Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ab363f8-3354-42ae-8cce-c7b7d3dab4c5 name=/runtime.v1.RuntimeService/Version
	Jul 29 21:06:29 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:29.029886790Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ab363f8-3354-42ae-8cce-c7b7d3dab4c5 name=/runtime.v1.RuntimeService/Version
	Jul 29 21:06:29 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:29.031541170Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1350e6b5-0e7f-4089-8354-aee90d7032d0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:06:29 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:29.031904316Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722287189031882117,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1350e6b5-0e7f-4089-8354-aee90d7032d0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:06:29 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:29.032387578Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a8fd587b-0158-4407-a239-aa68790684bb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:06:29 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:29.032447591Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a8fd587b-0158-4407-a239-aa68790684bb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:06:29 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:29.032933662Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:65cd831b460a369bfa98be8a1ac6cf5c327bb25a90bcf9e9fef9d730d49489af,PodSandboxId:92930d7dc2e683313b1209d1883000ac0db13bccfed4ff675a19656d38f35adc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722287182107810451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c36c7d421a423eaa5b3ec703392e2d37,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8300e4a49509e72642a5e3f250c27c459746389eefd29f17a134c39d465fb4,PodSandboxId:e14f8e5053fffa10566e305d4ea7fd995794a87e6d7c75e93956e8898343aa7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722287182129323397,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c42b066dd261d509e9c9201269618378,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.contai
ner.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e243ff1d611d3a7bc4c0b0818da5e9551a248e298ecedbf4fa3db66bf5ee3bc,PodSandboxId:567a225e66c5eb49151df09ab69de113f35370b1b850dcb1b86bfb1f1b224ee9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722287182115856182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e74de1adae252c6ba4b93d51aa3146,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9692fcebb577b651a4e161251fb4f8cf59a2a94ee25f314b9e38b9172ec188f5,PodSandboxId:123231a763545397975cde89f945c18159e7bfc6dcc5fdaeab07e2d2efcc2d7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722287169833567828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed661b7f-a645-4fa1-a75f-559f6ccf63ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b511569e53c23e343fcdef376fb166d45f8aaee3d4e832b9d46c1f2344363d76,PodSandboxId:037f40bff420b11a0266d39bffb04134a8d6f0f5c350ed958921aa4c5d435de3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287169730576194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gv2fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b25524d-d6b0-4252-8155-8b06731b95e2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c255beeb6c0a3bb9d0cfd3336438149d7ff5e7c166ca1f6099344cbb1515e8c,PodSandboxId:ec960a8a4a1960e3152e2486a9fd1656644ae292f329c8104d6a9ed2e56b8dce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722287169726078004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 4cb859a8044705795f72f0646ed35345,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33d229b62be637cf0103664a6d6b6f426304b53761dea90c4517ea7bc0d97d4b,PodSandboxId:4d8a9ad61a40eb95499820ff4a3c559203cdf2a92b7ee0d9cf33da8f86372659,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287169538408852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jrvps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed098d39-429a-4f0
c-a164-b7d157e7ace3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788ca70c3b478c793672ae6eac10ee62000aa70a1843351c83a0a4dfc4ea173e,PodSandboxId:92930d7dc2e683313b1209d1883000ac0db13bccfed4ff675a19656d38f35adc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:172228716872
1353995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c36c7d421a423eaa5b3ec703392e2d37,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b3c9c7d3f809ccb859b1dec7db956bb5e88e7469c4b27e3a4bc208987f09d1,PodSandboxId:567a225e66c5eb49151df09ab69de113f35370b1b850dcb1b86bfb1f1b224ee9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722287168674533109,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e74de1adae252c6ba4b93d51aa3146,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4bc8a75b5fbde2897e52cf10e7fb2aa14ff0165ba56895b1806224e60f297c,PodSandboxId:e14f8e5053fffa10566e305d4ea7fd995794a87e6d7c75e93956e8898343aa7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722287168545910257,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c42b066dd261d509e9c9201269618378,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c17e65ab0a065151fd4a1d767310dbbe9a828ddeddef099f9e68a9ecfed333e,PodSandboxId:074c2d76298c73edeb136cddeff61c73b5a62eca56f2ed3f4f4757957912dadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722287075366245484,Labels:map[string]str
ing{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gv2fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b25524d-d6b0-4252-8155-8b06731b95e2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e01cc6db7196ba1185bcbd1430e9ca9df83a6ebbf7c81d3bb6486570e4567f8,PodSandboxId:8aff9cdccca7ef1e88f8859f7fca5df9167c16933a2d8d60cf3f274735684bc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722287075221618229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jrvps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed098d39-429a-4f0c-a164-b7d157e7ace3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96ff3f6cc7d8969ef2d8d95c56ab52175bd0994743acc269bbc415578923e111,PodSandboxId:d823cddeedf9aba9140845e65d6566828c9a
f7030d51a494693a958107b6dcd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722287074296183121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cb859a8044705795f72f0646ed35345,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db8371a49e22502ad66d11237b303e810f279078735a974d36c61c2bbca791,PodSandboxId:c1b5bb93db70f653f2220c41902ca3efd3b01a5f76
a39bf7cffd0f94032c61a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722287055676897253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed661b7f-a645-4fa1-a75f-559f6ccf63ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d6cb694e2c4becb7b39c714268607c4dab680aa88ed540d1d1afdca46f1fd77,PodSandboxId:fc7cb7fbdf6de451447d02ff512f8aca3162ccaed37cfe3bcc1d4c8
31e8f667f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722287054701577470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8b9w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57eb6810-3346-41c7-ac5f-b511e215af50,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a8fd587b-0158-4407-a239-aa68790684bb name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:06:29 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:29.073316063Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5972fa24-9ee4-4b12-a40b-630c7d00e2c6 name=/runtime.v1.RuntimeService/Version
	Jul 29 21:06:29 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:29.073437453Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5972fa24-9ee4-4b12-a40b-630c7d00e2c6 name=/runtime.v1.RuntimeService/Version
	Jul 29 21:06:29 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:29.074736905Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0e2fb8bb-2075-405d-beec-6bde5dc804a0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:06:29 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:29.075213534Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722287189075182956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125257,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0e2fb8bb-2075-405d-beec-6bde5dc804a0 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:06:29 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:29.076270771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9f6c88c1-7e81-422d-9d0b-c868de6fb3e7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:06:29 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:29.076329183Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9f6c88c1-7e81-422d-9d0b-c868de6fb3e7 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:06:29 kubernetes-upgrade-171355 crio[3187]: time="2024-07-29 21:06:29.076608666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:65cd831b460a369bfa98be8a1ac6cf5c327bb25a90bcf9e9fef9d730d49489af,PodSandboxId:92930d7dc2e683313b1209d1883000ac0db13bccfed4ff675a19656d38f35adc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_RUNNING,CreatedAt:1722287182107810451,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c36c7d421a423eaa5b3ec703392e2d37,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 3,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8300e4a49509e72642a5e3f250c27c459746389eefd29f17a134c39d465fb4,PodSandboxId:e14f8e5053fffa10566e305d4ea7fd995794a87e6d7c75e93956e8898343aa7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_RUNNING,CreatedAt:1722287182129323397,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c42b066dd261d509e9c9201269618378,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.contai
ner.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e243ff1d611d3a7bc4c0b0818da5e9551a248e298ecedbf4fa3db66bf5ee3bc,PodSandboxId:567a225e66c5eb49151df09ab69de113f35370b1b850dcb1b86bfb1f1b224ee9,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_RUNNING,CreatedAt:1722287182115856182,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e74de1adae252c6ba4b93d51aa3146,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 3,io.kubernetes.co
ntainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9692fcebb577b651a4e161251fb4f8cf59a2a94ee25f314b9e38b9172ec188f5,PodSandboxId:123231a763545397975cde89f945c18159e7bfc6dcc5fdaeab07e2d2efcc2d7b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1722287169833567828,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed661b7f-a645-4fa1-a75f-559f6ccf63ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b511569e53c23e343fcdef376fb166d45f8aaee3d4e832b9d46c1f2344363d76,PodSandboxId:037f40bff420b11a0266d39bffb04134a8d6f0f5c350ed958921aa4c5d435de3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287169730576194,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gv2fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b25524d-d6b0-4252-8155-8b06731b95e2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c255beeb6c0a3bb9d0cfd3336438149d7ff5e7c166ca1f6099344cbb1515e8c,PodSandboxId:ec960a8a4a1960e3152e2486a9fd1656644ae292f329c8104d6a9ed2e56b8dce,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_RUNNING,CreatedAt:1722287169726078004,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.ku
bernetes.pod.uid: 4cb859a8044705795f72f0646ed35345,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33d229b62be637cf0103664a6d6b6f426304b53761dea90c4517ea7bc0d97d4b,PodSandboxId:4d8a9ad61a40eb95499820ff4a3c559203cdf2a92b7ee0d9cf33da8f86372659,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287169538408852,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jrvps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed098d39-429a-4f0
c-a164-b7d157e7ace3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:788ca70c3b478c793672ae6eac10ee62000aa70a1843351c83a0a4dfc4ea173e,PodSandboxId:92930d7dc2e683313b1209d1883000ac0db13bccfed4ff675a19656d38f35adc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938,State:CONTAINER_EXITED,CreatedAt:172228716872
1353995,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c36c7d421a423eaa5b3ec703392e2d37,},Annotations:map[string]string{io.kubernetes.container.hash: ecb4da08,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66b3c9c7d3f809ccb859b1dec7db956bb5e88e7469c4b27e3a4bc208987f09d1,PodSandboxId:567a225e66c5eb49151df09ab69de113f35370b1b850dcb1b86bfb1f1b224ee9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa,State:CONTAINER_EXITED,CreatedAt:1722287168674533109,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 67e74de1adae252c6ba4b93d51aa3146,},Annotations:map[string]string{io.kubernetes.container.hash: e06de91,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b4bc8a75b5fbde2897e52cf10e7fb2aa14ff0165ba56895b1806224e60f297c,PodSandboxId:e14f8e5053fffa10566e305d4ea7fd995794a87e6d7c75e93956e8898343aa7d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5,State:CONTAINER_EXITED,CreatedAt:1722287168545910257,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c42b066dd261d509e9c9201269618378,},Annotations:map[string]string{io.kubernetes.container.hash: ec666c98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2c17e65ab0a065151fd4a1d767310dbbe9a828ddeddef099f9e68a9ecfed333e,PodSandboxId:074c2d76298c73edeb136cddeff61c73b5a62eca56f2ed3f4f4757957912dadc,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722287075366245484,Labels:map[string]str
ing{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-gv2fq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6b25524d-d6b0-4252-8155-8b06731b95e2,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e01cc6db7196ba1185bcbd1430e9ca9df83a6ebbf7c81d3bb6486570e4567f8,PodSandboxId:8aff9cdccca7ef1e88f8859f7fca5df9167c16933a2d8d60cf3f274735684bc0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]s
tring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722287075221618229,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5cfdc65f69-jrvps,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed098d39-429a-4f0c-a164-b7d157e7ace3,},Annotations:map[string]string{io.kubernetes.container.hash: e6f52134,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:96ff3f6cc7d8969ef2d8d95c56ab52175bd0994743acc269bbc415578923e111,PodSandboxId:d823cddeedf9aba9140845e65d6566828c9a
f7030d51a494693a958107b6dcd5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b,State:CONTAINER_EXITED,CreatedAt:1722287074296183121,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-171355,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4cb859a8044705795f72f0646ed35345,},Annotations:map[string]string{io.kubernetes.container.hash: 9efbbee0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82db8371a49e22502ad66d11237b303e810f279078735a974d36c61c2bbca791,PodSandboxId:c1b5bb93db70f653f2220c41902ca3efd3b01a5f76
a39bf7cffd0f94032c61a7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1722287055676897253,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed661b7f-a645-4fa1-a75f-559f6ccf63ab,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0d6cb694e2c4becb7b39c714268607c4dab680aa88ed540d1d1afdca46f1fd77,PodSandboxId:fc7cb7fbdf6de451447d02ff512f8aca3162ccaed37cfe3bcc1d4c8
31e8f667f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899,State:CONTAINER_EXITED,CreatedAt:1722287054701577470,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8b9w,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57eb6810-3346-41c7-ac5f-b511e215af50,},Annotations:map[string]string{io.kubernetes.container.hash: 65225ff2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9f6c88c1-7e81-422d-9d0b-c868de6fb3e7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	dc8300e4a4950       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   7 seconds ago        Running             kube-controller-manager   3                   e14f8e5053fff       kube-controller-manager-kubernetes-upgrade-171355
	4e243ff1d611d       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   7 seconds ago        Running             etcd                      3                   567a225e66c5e       etcd-kubernetes-upgrade-171355
	65cd831b460a3       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   7 seconds ago        Running             kube-apiserver            3                   92930d7dc2e68       kube-apiserver-kubernetes-upgrade-171355
	9692fcebb577b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   19 seconds ago       Running             storage-provisioner       2                   123231a763545       storage-provisioner
	b511569e53c23       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago       Running             coredns                   2                   037f40bff420b       coredns-5cfdc65f69-gv2fq
	3c255beeb6c0a       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   19 seconds ago       Running             kube-scheduler            2                   ec960a8a4a196       kube-scheduler-kubernetes-upgrade-171355
	33d229b62be63       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   19 seconds ago       Running             coredns                   2                   4d8a9ad61a40e       coredns-5cfdc65f69-jrvps
	788ca70c3b478       f9a39d2c9991aa5d83965dc6fd8ee9c9c189ddfbced4d47cde4c035a76619938   20 seconds ago       Exited              kube-apiserver            2                   92930d7dc2e68       kube-apiserver-kubernetes-upgrade-171355
	66b3c9c7d3f80       cfec37af81d9116b198de584ada5b179d0a5ce037d244d2c42b3772a1df479aa   20 seconds ago       Exited              etcd                      2                   567a225e66c5e       etcd-kubernetes-upgrade-171355
	2b4bc8a75b5fb       63cf9a9f4bf5d7e47bcdba087f782ef541d17692664c091301b7cc9bc68af5b5   20 seconds ago       Exited              kube-controller-manager   2                   e14f8e5053fff       kube-controller-manager-kubernetes-upgrade-171355
	2c17e65ab0a06       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   1                   074c2d76298c7       coredns-5cfdc65f69-gv2fq
	3e01cc6db7196       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   1                   8aff9cdccca7e       coredns-5cfdc65f69-jrvps
	96ff3f6cc7d89       d2edabc17c519a8be8c306c60f58842358b021d6d7de1ad3e84b45e890f7cc4b   About a minute ago   Exited              kube-scheduler            1                   d823cddeedf9a       kube-scheduler-kubernetes-upgrade-171355
	82db8371a49e2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   2 minutes ago        Exited              storage-provisioner       1                   c1b5bb93db70f       storage-provisioner
	0d6cb694e2c4b       c6c6581369906a51688e90adc077e111a3353440810ade7a4cea193d366c5899   2 minutes ago        Exited              kube-proxy                0                   fc7cb7fbdf6de       kube-proxy-h8b9w
	
	
	==> coredns [2c17e65ab0a065151fd4a1d767310dbbe9a828ddeddef099f9e68a9ecfed333e] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [33d229b62be637cf0103664a6d6b6f426304b53761dea90c4517ea7bc0d97d4b] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: unknown (get namespaces)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: unknown (get services)
	
	
	==> coredns [3e01cc6db7196ba1185bcbd1430e9ca9df83a6ebbf7c81d3bb6486570e4567f8] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [b511569e53c23e343fcdef376fb166d45f8aaee3d4e832b9d46c1f2344363d76] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-171355
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-171355
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 21:04:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-171355
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 21:06:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 21:06:25 +0000   Mon, 29 Jul 2024 21:04:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 21:06:25 +0000   Mon, 29 Jul 2024 21:04:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 21:06:25 +0000   Mon, 29 Jul 2024 21:04:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 21:06:25 +0000   Mon, 29 Jul 2024 21:04:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.242
	  Hostname:    kubernetes-upgrade-171355
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 96d677321ce04992b407815a6bea5d28
	  System UUID:                96d67732-1ce0-4992-b407-815a6bea5d28
	  Boot ID:                    8b4482d3-410d-46e6-a248-03bf62c4fc54
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.0-beta.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5cfdc65f69-gv2fq                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m15s
	  kube-system                 coredns-5cfdc65f69-jrvps                              100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m15s
	  kube-system                 etcd-kubernetes-upgrade-171355                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m19s
	  kube-system                 kube-apiserver-kubernetes-upgrade-171355              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-171355     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-proxy-h8b9w                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-scheduler-kubernetes-upgrade-171355              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 storage-provisioner                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m27s (x8 over 2m27s)  kubelet          Node kubernetes-upgrade-171355 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s (x8 over 2m27s)  kubelet          Node kubernetes-upgrade-171355 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s (x7 over 2m27s)  kubelet          Node kubernetes-upgrade-171355 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m15s                  node-controller  Node kubernetes-upgrade-171355 event: Registered Node kubernetes-upgrade-171355 in Controller
	  Normal  Starting                 8s                     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)        kubelet          Node kubernetes-upgrade-171355 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)        kubelet          Node kubernetes-upgrade-171355 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)        kubelet          Node kubernetes-upgrade-171355 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                     kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                     node-controller  Node kubernetes-upgrade-171355 event: Registered Node kubernetes-upgrade-171355 in Controller
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.468440] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.068571] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057238] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.189513] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.135675] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.306254] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +4.095170] systemd-fstab-generator[734]: Ignoring "noauto" option for root device
	[Jul29 21:04] systemd-fstab-generator[854]: Ignoring "noauto" option for root device
	[  +0.087639] kauditd_printk_skb: 158 callbacks suppressed
	[  +9.311730] systemd-fstab-generator[1245]: Ignoring "noauto" option for root device
	[  +0.083448] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.122973] kauditd_printk_skb: 107 callbacks suppressed
	[ +18.340462] systemd-fstab-generator[2832]: Ignoring "noauto" option for root device
	[  +0.246513] systemd-fstab-generator[2888]: Ignoring "noauto" option for root device
	[  +0.286138] systemd-fstab-generator[2951]: Ignoring "noauto" option for root device
	[  +0.213176] systemd-fstab-generator[2983]: Ignoring "noauto" option for root device
	[  +0.408487] systemd-fstab-generator[3017]: Ignoring "noauto" option for root device
	[Jul29 21:06] systemd-fstab-generator[3325]: Ignoring "noauto" option for root device
	[  +0.102457] kauditd_printk_skb: 203 callbacks suppressed
	[ +13.773434] systemd-fstab-generator[4252]: Ignoring "noauto" option for root device
	[  +0.092774] kauditd_printk_skb: 119 callbacks suppressed
	[  +5.684777] systemd-fstab-generator[4550]: Ignoring "noauto" option for root device
	[  +0.106711] kauditd_printk_skb: 41 callbacks suppressed
	
	
	==> etcd [4e243ff1d611d3a7bc4c0b0818da5e9551a248e298ecedbf4fa3db66bf5ee3bc] <==
	{"level":"info","ts":"2024-07-29T21:06:22.388848Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T21:06:22.393589Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T21:06:22.393667Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-29T21:06:22.393856Z","caller":"embed/etcd.go:598","msg":"serving peer traffic","address":"192.168.50.242:2380"}
	{"level":"info","ts":"2024-07-29T21:06:22.393895Z","caller":"embed/etcd.go:570","msg":"cmux::serve","address":"192.168.50.242:2380"}
	{"level":"info","ts":"2024-07-29T21:06:22.396296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df0ea393864090de switched to configuration voters=(16072963974139777246)"}
	{"level":"info","ts":"2024-07-29T21:06:22.396405Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"12beef96e2c8ec46","local-member-id":"df0ea393864090de","added-peer-id":"df0ea393864090de","added-peer-peer-urls":["https://192.168.50.242:2380"]}
	{"level":"info","ts":"2024-07-29T21:06:22.396535Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"12beef96e2c8ec46","local-member-id":"df0ea393864090de","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T21:06:22.396577Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T21:06:23.56116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df0ea393864090de is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-29T21:06:23.561228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df0ea393864090de became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-29T21:06:23.561306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df0ea393864090de received MsgPreVoteResp from df0ea393864090de at term 3"}
	{"level":"info","ts":"2024-07-29T21:06:23.561326Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df0ea393864090de became candidate at term 4"}
	{"level":"info","ts":"2024-07-29T21:06:23.561334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df0ea393864090de received MsgVoteResp from df0ea393864090de at term 4"}
	{"level":"info","ts":"2024-07-29T21:06:23.561346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df0ea393864090de became leader at term 4"}
	{"level":"info","ts":"2024-07-29T21:06:23.561356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: df0ea393864090de elected leader df0ea393864090de at term 4"}
	{"level":"info","ts":"2024-07-29T21:06:23.566642Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"df0ea393864090de","local-member-attributes":"{Name:kubernetes-upgrade-171355 ClientURLs:[https://192.168.50.242:2379]}","request-path":"/0/members/df0ea393864090de/attributes","cluster-id":"12beef96e2c8ec46","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T21:06:23.566825Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T21:06:23.566973Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T21:06:23.567361Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T21:06:23.567421Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T21:06:23.56837Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T21:06:23.568456Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T21:06:23.569735Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T21:06:23.56987Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.242:2379"}
	
	
	==> etcd [66b3c9c7d3f809ccb859b1dec7db956bb5e88e7469c4b27e3a4bc208987f09d1] <==
	{"level":"info","ts":"2024-07-29T21:06:10.497131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df0ea393864090de became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T21:06:10.497159Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df0ea393864090de received MsgPreVoteResp from df0ea393864090de at term 2"}
	{"level":"info","ts":"2024-07-29T21:06:10.497177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df0ea393864090de became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T21:06:10.497182Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df0ea393864090de received MsgVoteResp from df0ea393864090de at term 3"}
	{"level":"info","ts":"2024-07-29T21:06:10.49719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"df0ea393864090de became leader at term 3"}
	{"level":"info","ts":"2024-07-29T21:06:10.497198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: df0ea393864090de elected leader df0ea393864090de at term 3"}
	{"level":"info","ts":"2024-07-29T21:06:10.501239Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"df0ea393864090de","local-member-attributes":"{Name:kubernetes-upgrade-171355 ClientURLs:[https://192.168.50.242:2379]}","request-path":"/0/members/df0ea393864090de/attributes","cluster-id":"12beef96e2c8ec46","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T21:06:10.501397Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T21:06:10.501466Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T21:06:10.501838Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T21:06:10.501869Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T21:06:10.502469Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T21:06:10.504456Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.242:2379"}
	{"level":"info","ts":"2024-07-29T21:06:10.510696Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-07-29T21:06:10.515817Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T21:06:19.73524Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T21:06:19.735334Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-171355","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.242:2380"],"advertise-client-urls":["https://192.168.50.242:2379"]}
	{"level":"warn","ts":"2024-07-29T21:06:19.735444Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T21:06:19.735484Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T21:06:19.737337Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.242:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T21:06:19.737394Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.242:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T21:06:19.737454Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"df0ea393864090de","current-leader-member-id":"df0ea393864090de"}
	{"level":"info","ts":"2024-07-29T21:06:19.740832Z","caller":"embed/etcd.go:580","msg":"stopping serving peer traffic","address":"192.168.50.242:2380"}
	{"level":"info","ts":"2024-07-29T21:06:19.740947Z","caller":"embed/etcd.go:585","msg":"stopped serving peer traffic","address":"192.168.50.242:2380"}
	{"level":"info","ts":"2024-07-29T21:06:19.740976Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-171355","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.242:2380"],"advertise-client-urls":["https://192.168.50.242:2379"]}
	
	
	==> kernel <==
	 21:06:29 up 2 min,  0 users,  load average: 0.86, 0.39, 0.15
	Linux kubernetes-upgrade-171355 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [65cd831b460a369bfa98be8a1ac6cf5c327bb25a90bcf9e9fef9d730d49489af] <==
	I0729 21:06:24.994137       1 establishing_controller.go:79] Starting EstablishingController
	I0729 21:06:24.994157       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0729 21:06:24.994205       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0729 21:06:24.994244       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0729 21:06:25.029080       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0729 21:06:25.037720       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0729 21:06:25.037756       1 policy_source.go:224] refreshing policies
	I0729 21:06:25.051846       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 21:06:25.061391       1 cache.go:39] Caches are synced for autoregister controller
	I0729 21:06:25.090054       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0729 21:06:25.092686       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0729 21:06:25.093375       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0729 21:06:25.090134       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0729 21:06:25.095387       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0729 21:06:25.095483       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0729 21:06:25.090177       1 shared_informer.go:320] Caches are synced for configmaps
	I0729 21:06:25.895663       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0729 21:06:26.206203       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.242]
	I0729 21:06:26.208462       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 21:06:26.216590       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 21:06:26.835129       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 21:06:26.849306       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 21:06:26.887715       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 21:06:27.013459       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 21:06:27.026719       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [788ca70c3b478c793672ae6eac10ee62000aa70a1843351c83a0a4dfc4ea173e] <==
	I0729 21:06:12.671538       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0729 21:06:12.671618       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0729 21:06:12.671660       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0729 21:06:12.675693       1 dynamic_serving_content.go:149] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0729 21:06:12.675790       1 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 21:06:12.682212       1 controller.go:157] Shutting down quota evaluator
	I0729 21:06:12.682885       1 controller.go:176] quota evaluator worker shutdown
	I0729 21:06:12.682973       1 controller.go:176] quota evaluator worker shutdown
	I0729 21:06:12.682986       1 controller.go:176] quota evaluator worker shutdown
	I0729 21:06:12.683002       1 controller.go:176] quota evaluator worker shutdown
	I0729 21:06:12.683096       1 controller.go:176] quota evaluator worker shutdown
	W0729 21:06:13.419922       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 21:06:13.419943       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0729 21:06:14.420329       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 21:06:14.420358       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0729 21:06:15.419675       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 21:06:15.419710       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0729 21:06:16.420671       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 21:06:16.420851       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E0729 21:06:17.419903       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0729 21:06:17.420306       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W0729 21:06:18.419730       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0729 21:06:18.419981       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	E0729 21:06:19.419787       1 storage_rbac.go:187] "Unhandled Error" err="unable to initialize clusterroles: Get \"https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles\": dial tcp 127.0.0.1:8443: connect: connection refused" logger="UnhandledError"
	W0729 21:06:19.419830       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-controller-manager [2b4bc8a75b5fbde2897e52cf10e7fb2aa14ff0165ba56895b1806224e60f297c] <==
	I0729 21:06:10.686827       1 serving.go:386] Generated self-signed cert in-memory
	I0729 21:06:11.286800       1 controllermanager.go:188] "Starting" version="v1.31.0-beta.0"
	I0729 21:06:11.286842       1 controllermanager.go:190] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 21:06:11.288656       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 21:06:11.288757       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 21:06:11.291278       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0729 21:06:11.291344       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [dc8300e4a49509e72642a5e3f250c27c459746389eefd29f17a134c39d465fb4] <==
	I0729 21:06:28.557062       1 shared_informer.go:320] Caches are synced for deployment
	I0729 21:06:28.571361       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0729 21:06:28.613425       1 shared_informer.go:320] Caches are synced for taint
	I0729 21:06:28.613758       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0729 21:06:28.614251       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-171355"
	I0729 21:06:28.614336       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 21:06:28.734452       1 shared_informer.go:320] Caches are synced for daemon sets
	I0729 21:06:28.755704       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 21:06:28.782367       1 shared_informer.go:320] Caches are synced for disruption
	I0729 21:06:29.170427       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0729 21:06:29.228315       1 shared_informer.go:320] Caches are synced for crt configmap
	I0729 21:06:29.256322       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0729 21:06:29.342564       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0729 21:06:29.342670       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0729 21:06:29.342749       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0729 21:06:29.344983       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0729 21:06:29.356295       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 21:06:29.403568       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 21:06:29.406981       1 shared_informer.go:320] Caches are synced for PV protection
	I0729 21:06:29.454464       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 21:06:29.465440       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 21:06:29.467807       1 shared_informer.go:320] Caches are synced for HPA
	I0729 21:06:29.506045       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 21:06:29.506074       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0729 21:06:29.520325       1 shared_informer.go:320] Caches are synced for resource quota
	
	
	==> kube-proxy [0d6cb694e2c4becb7b39c714268607c4dab680aa88ed540d1d1afdca46f1fd77] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0729 21:04:14.942722       1 proxier.go:705] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0729 21:04:14.952185       1 server.go:682] "Successfully retrieved node IP(s)" IPs=["192.168.50.242"]
	E0729 21:04:14.952256       1 server.go:235] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0729 21:04:14.983144       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0729 21:04:14.983191       1 server.go:246] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 21:04:14.983251       1 server_linux.go:170] "Using iptables Proxier"
	I0729 21:04:14.985849       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0729 21:04:14.986266       1 server.go:488] "Version info" version="v1.31.0-beta.0"
	I0729 21:04:14.986298       1 server.go:490] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 21:04:14.987745       1 config.go:197] "Starting service config controller"
	I0729 21:04:14.987776       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 21:04:14.987803       1 config.go:104] "Starting endpoint slice config controller"
	I0729 21:04:14.987807       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 21:04:14.988574       1 config.go:326] "Starting node config controller"
	I0729 21:04:14.988602       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 21:04:15.088371       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 21:04:15.088454       1 shared_informer.go:320] Caches are synced for service config
	I0729 21:04:15.088655       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3c255beeb6c0a3bb9d0cfd3336438149d7ff5e7c166ca1f6099344cbb1515e8c] <==
	I0729 21:06:10.944579       1 serving.go:386] Generated self-signed cert in-memory
	W0729 21:06:12.471573       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 21:06:12.471647       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 21:06:12.471657       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 21:06:12.471663       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 21:06:12.624003       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0729 21:06:12.625963       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 21:06:12.634350       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 21:06:12.637250       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 21:06:12.637293       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 21:06:12.637327       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0729 21:06:12.737814       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 21:06:24.926479       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	E0729 21:06:24.926871       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)" logger="UnhandledError"
	E0729 21:06:24.927108       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0729 21:06:24.938989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: unknown (get pods)" logger="UnhandledError"
	E0729 21:06:24.941412       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0729 21:06:24.941810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: unknown (get csistoragecapacities.storage.k8s.io)" logger="UnhandledError"
	E0729 21:06:24.942067       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: unknown (get csidrivers.storage.k8s.io)" logger="UnhandledError"
	
	
	==> kube-scheduler [96ff3f6cc7d8969ef2d8d95c56ab52175bd0994743acc269bbc415578923e111] <==
	I0729 21:04:36.928646       1 serving.go:386] Generated self-signed cert in-memory
	W0729 21:04:47.965999       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.50.242:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0729 21:04:47.966176       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 21:04:47.966186       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 21:04:57.649916       1 server.go:164] "Starting Kubernetes Scheduler" version="v1.31.0-beta.0"
	I0729 21:04:57.649934       1 server.go:166] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0729 21:04:57.649954       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I0729 21:04:57.653875       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 21:04:57.653906       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0729 21:04:57.653923       1 shared_informer.go:316] "Unhandled Error" err="unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file" logger="UnhandledError"
	I0729 21:04:57.654126       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E0729 21:04:57.654211       1 server.go:237] "waiting for handlers to sync" err="context canceled"
	E0729 21:04:57.654258       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 29 21:06:21 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:21.843722    4259 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c36c7d421a423eaa5b3ec703392e2d37-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-171355\" (UID: \"c36c7d421a423eaa5b3ec703392e2d37\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-171355"
	Jul 29 21:06:21 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:21.843737    4259 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c42b066dd261d509e9c9201269618378-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-171355\" (UID: \"c42b066dd261d509e9c9201269618378\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-171355"
	Jul 29 21:06:21 kubernetes-upgrade-171355 kubelet[4259]: E0729 21:06:21.847303    4259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-171355?timeout=10s\": dial tcp 192.168.50.242:8443: connect: connection refused" interval="400ms"
	Jul 29 21:06:21 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:21.947990    4259 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-171355"
	Jul 29 21:06:21 kubernetes-upgrade-171355 kubelet[4259]: E0729 21:06:21.948762    4259 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.242:8443: connect: connection refused" node="kubernetes-upgrade-171355"
	Jul 29 21:06:22 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:22.087804    4259 scope.go:117] "RemoveContainer" containerID="66b3c9c7d3f809ccb859b1dec7db956bb5e88e7469c4b27e3a4bc208987f09d1"
	Jul 29 21:06:22 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:22.087961    4259 scope.go:117] "RemoveContainer" containerID="788ca70c3b478c793672ae6eac10ee62000aa70a1843351c83a0a4dfc4ea173e"
	Jul 29 21:06:22 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:22.095217    4259 scope.go:117] "RemoveContainer" containerID="2b4bc8a75b5fbde2897e52cf10e7fb2aa14ff0165ba56895b1806224e60f297c"
	Jul 29 21:06:22 kubernetes-upgrade-171355 kubelet[4259]: E0729 21:06:22.250152    4259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-171355?timeout=10s\": dial tcp 192.168.50.242:8443: connect: connection refused" interval="800ms"
	Jul 29 21:06:22 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:22.351142    4259 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-171355"
	Jul 29 21:06:22 kubernetes-upgrade-171355 kubelet[4259]: E0729 21:06:22.352246    4259 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.242:8443: connect: connection refused" node="kubernetes-upgrade-171355"
	Jul 29 21:06:23 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:23.154573    4259 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-171355"
	Jul 29 21:06:25 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:25.082098    4259 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-171355"
	Jul 29 21:06:25 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:25.082235    4259 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-171355"
	Jul 29 21:06:25 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:25.082278    4259 kuberuntime_manager.go:1524] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 21:06:25 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:25.083740    4259 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 21:06:25 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:25.628650    4259 apiserver.go:52] "Watching apiserver"
	Jul 29 21:06:25 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:25.640052    4259 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jul 29 21:06:25 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:25.720959    4259 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ed661b7f-a645-4fa1-a75f-559f6ccf63ab-tmp\") pod \"storage-provisioner\" (UID: \"ed661b7f-a645-4fa1-a75f-559f6ccf63ab\") " pod="kube-system/storage-provisioner"
	Jul 29 21:06:25 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:25.721278    4259 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57eb6810-3346-41c7-ac5f-b511e215af50-xtables-lock\") pod \"kube-proxy-h8b9w\" (UID: \"57eb6810-3346-41c7-ac5f-b511e215af50\") " pod="kube-system/kube-proxy-h8b9w"
	Jul 29 21:06:25 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:25.721695    4259 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57eb6810-3346-41c7-ac5f-b511e215af50-lib-modules\") pod \"kube-proxy-h8b9w\" (UID: \"57eb6810-3346-41c7-ac5f-b511e215af50\") " pod="kube-system/kube-proxy-h8b9w"
	Jul 29 21:06:25 kubernetes-upgrade-171355 kubelet[4259]: I0729 21:06:25.935645    4259 scope.go:117] "RemoveContainer" containerID="0d6cb694e2c4becb7b39c714268607c4dab680aa88ed540d1d1afdca46f1fd77"
	Jul 29 21:06:25 kubernetes-upgrade-171355 kubelet[4259]: E0729 21:06:25.943484    4259 log.go:32] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = the container name \"k8s_kube-proxy_kube-proxy-h8b9w_kube-system_57eb6810-3346-41c7-ac5f-b511e215af50_1\" is already in use by d3a46bb5dc307896613ff81df04095807930745c20a3809df645fd31cf6a3b28. You have to remove that container to be able to reuse that name: that name is already in use" podSandboxID="ea21e233e685ff9943d1f6cacd91c8289af72fc93677b4fd4d89b9b2c72a8446"
	Jul 29 21:06:25 kubernetes-upgrade-171355 kubelet[4259]: E0729 21:06:25.943631    4259 kuberuntime_manager.go:1257] "Unhandled Error" err="container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.31.0-beta.0,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modul
es,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7b4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-proxy-h8b9w_kube-system(57eb6810-3346-41c7-ac5f-b511e215af50): CreateContainerError: the container name \"k8s_kube-proxy_kube-p
roxy-h8b9w_kube-system_57eb6810-3346-41c7-ac5f-b511e215af50_1\" is already in use by d3a46bb5dc307896613ff81df04095807930745c20a3809df645fd31cf6a3b28. You have to remove that container to be able to reuse that name: that name is already in use" logger="UnhandledError"
	Jul 29 21:06:25 kubernetes-upgrade-171355 kubelet[4259]: E0729 21:06:25.944851    4259 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"the container name \\\"k8s_kube-proxy_kube-proxy-h8b9w_kube-system_57eb6810-3346-41c7-ac5f-b511e215af50_1\\\" is already in use by d3a46bb5dc307896613ff81df04095807930745c20a3809df645fd31cf6a3b28. You have to remove that container to be able to reuse that name: that name is already in use\"" pod="kube-system/kube-proxy-h8b9w" podUID="57eb6810-3346-41c7-ac5f-b511e215af50"
	
	
	==> storage-provisioner [82db8371a49e22502ad66d11237b303e810f279078735a974d36c61c2bbca791] <==
	I0729 21:04:15.771987       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 21:04:15.785225       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 21:04:15.785382       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 21:04:15.798638       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 21:04:15.799068       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-171355_c526c5a0-b1a7-4e2d-9f25-dee611472f82!
	I0729 21:04:15.800489       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"75bc20c1-c55d-4785-b471-5ca9b548050d", APIVersion:"v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-171355_c526c5a0-b1a7-4e2d-9f25-dee611472f82 became leader
	I0729 21:04:15.899297       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-171355_c526c5a0-b1a7-4e2d-9f25-dee611472f82!
	
	
	==> storage-provisioner [9692fcebb577b651a4e161251fb4f8cf59a2a94ee25f314b9e38b9172ec188f5] <==
	I0729 21:06:10.408952       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 21:06:12.580770       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 21:06:12.580875       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0729 21:06:13.693480       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0729 21:06:17.145576       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0729 21:06:21.404117       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-171355 -n kubernetes-upgrade-171355
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-171355 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-171355" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-171355
--- FAIL: TestKubernetesUpgrade (484.31s)
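
The kubelet CreateContainerError captured above states that the stale kube-proxy container must be removed before its name can be reused. A rough sketch of how that could be inspected and cleared by hand, assuming crictl is available inside the minikube VM (the container ID below is the one quoted in the kubelet log, not a new value):

	# Open a shell on the affected node.
	out/minikube-linux-amd64 ssh -p kubernetes-upgrade-171355
	# List all kube-proxy containers, including exited ones, to find the entry still holding the name.
	sudo crictl ps -a --name kube-proxy
	# Remove the container that owns the name so the kubelet can recreate kube-proxy.
	sudo crictl rm d3a46bb5dc307896613ff81df04095807930745c20a3809df645fd31cf6a3b28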

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (64.02s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-913034 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-913034 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.19182178s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
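The failure is a missing-substring check: pause_test.go expects the second start's output to contain the quoted line. A rough shell equivalent of that check, reusing the command and flags from the run above (the grep pipeline is illustrative, not minikube's actual test code):

	# Re-run the second start and look for the expected reconfiguration-skip message.
	out/minikube-linux-amd64 start -p pause-913034 --alsologtostderr -v=1 \
	  --driver=kvm2 --container-runtime=crio 2>&1 \
	  | grep -F "The running cluster does not require reconfiguration" \
	  || echo "second start reconfigured the cluster"

The output the test actually captured follows.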
-- stdout --
	* [pause-913034] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-913034" primary control-plane node in "pause-913034" cluster
	* Updating the running kvm2 "pause-913034" VM ...
	* Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-913034" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 21:03:51.759008  787271 out.go:291] Setting OutFile to fd 1 ...
	I0729 21:03:51.759142  787271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 21:03:51.759152  787271 out.go:304] Setting ErrFile to fd 2...
	I0729 21:03:51.759159  787271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 21:03:51.759456  787271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 21:03:51.760246  787271 out.go:298] Setting JSON to false
	I0729 21:03:51.761668  787271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":17179,"bootTime":1722269853,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 21:03:51.761750  787271 start.go:139] virtualization: kvm guest
	I0729 21:03:51.764230  787271 out.go:177] * [pause-913034] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 21:03:51.765814  787271 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 21:03:51.765817  787271 notify.go:220] Checking for updates...
	I0729 21:03:51.767120  787271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 21:03:51.768530  787271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 21:03:51.769903  787271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 21:03:51.771333  787271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 21:03:51.772719  787271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 21:03:51.774573  787271 config.go:182] Loaded profile config "pause-913034": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 21:03:51.775077  787271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 21:03:51.775131  787271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 21:03:51.791945  787271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43983
	I0729 21:03:51.792424  787271 main.go:141] libmachine: () Calling .GetVersion
	I0729 21:03:51.792977  787271 main.go:141] libmachine: Using API Version  1
	I0729 21:03:51.793000  787271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 21:03:51.793382  787271 main.go:141] libmachine: () Calling .GetMachineName
	I0729 21:03:51.793692  787271 main.go:141] libmachine: (pause-913034) Calling .DriverName
	I0729 21:03:51.793997  787271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 21:03:51.794425  787271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 21:03:51.794464  787271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 21:03:51.809999  787271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32845
	I0729 21:03:51.810501  787271 main.go:141] libmachine: () Calling .GetVersion
	I0729 21:03:51.811032  787271 main.go:141] libmachine: Using API Version  1
	I0729 21:03:51.811075  787271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 21:03:51.811473  787271 main.go:141] libmachine: () Calling .GetMachineName
	I0729 21:03:51.811691  787271 main.go:141] libmachine: (pause-913034) Calling .DriverName
	I0729 21:03:51.849952  787271 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 21:03:51.851342  787271 start.go:297] selected driver: kvm2
	I0729 21:03:51.851369  787271 start.go:901] validating driver "kvm2" against &{Name:pause-913034 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.30.3 ClusterName:pause-913034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.20 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-devi
ce-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 21:03:51.851570  787271 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 21:03:51.852169  787271 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 21:03:51.852265  787271 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 21:03:51.868306  787271 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 21:03:51.869291  787271 cni.go:84] Creating CNI manager for ""
	I0729 21:03:51.869311  787271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 21:03:51.869391  787271 start.go:340] cluster config:
	{Name:pause-913034 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:pause-913034 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.20 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:f
alse registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 21:03:51.869592  787271 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 21:03:51.871351  787271 out.go:177] * Starting "pause-913034" primary control-plane node in "pause-913034" cluster
	I0729 21:03:51.872473  787271 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 21:03:51.872528  787271 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 21:03:51.872544  787271 cache.go:56] Caching tarball of preloaded images
	I0729 21:03:51.872640  787271 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 21:03:51.872655  787271 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 21:03:51.872810  787271 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/pause-913034/config.json ...
	I0729 21:03:51.873078  787271 start.go:360] acquireMachinesLock for pause-913034: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 21:03:54.600960  787271 start.go:364] duration metric: took 2.727804524s to acquireMachinesLock for "pause-913034"
	I0729 21:03:54.601041  787271 start.go:96] Skipping create...Using existing machine configuration
	I0729 21:03:54.601054  787271 fix.go:54] fixHost starting: 
	I0729 21:03:54.601531  787271 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 21:03:54.601594  787271 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 21:03:54.620899  787271 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I0729 21:03:54.621379  787271 main.go:141] libmachine: () Calling .GetVersion
	I0729 21:03:54.622011  787271 main.go:141] libmachine: Using API Version  1
	I0729 21:03:54.622040  787271 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 21:03:54.622514  787271 main.go:141] libmachine: () Calling .GetMachineName
	I0729 21:03:54.622752  787271 main.go:141] libmachine: (pause-913034) Calling .DriverName
	I0729 21:03:54.622931  787271 main.go:141] libmachine: (pause-913034) Calling .GetState
	I0729 21:03:54.624833  787271 fix.go:112] recreateIfNeeded on pause-913034: state=Running err=<nil>
	W0729 21:03:54.624857  787271 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 21:03:54.626695  787271 out.go:177] * Updating the running kvm2 "pause-913034" VM ...
	I0729 21:03:54.628051  787271 machine.go:94] provisionDockerMachine start ...
	I0729 21:03:54.628083  787271 main.go:141] libmachine: (pause-913034) Calling .DriverName
	I0729 21:03:54.628374  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHHostname
	I0729 21:03:54.631396  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:03:54.631854  787271 main.go:141] libmachine: (pause-913034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:fa:69", ip: ""} in network mk-pause-913034: {Iface:virbr3 ExpiryTime:2024-07-29 22:03:04 +0000 UTC Type:0 Mac:52:54:00:55:fa:69 Iaid: IPaddr:192.168.61.20 Prefix:24 Hostname:pause-913034 Clientid:01:52:54:00:55:fa:69}
	I0729 21:03:54.631877  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined IP address 192.168.61.20 and MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:03:54.632081  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHPort
	I0729 21:03:54.632248  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHKeyPath
	I0729 21:03:54.632446  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHKeyPath
	I0729 21:03:54.632734  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHUsername
	I0729 21:03:54.632950  787271 main.go:141] libmachine: Using SSH client type: native
	I0729 21:03:54.633184  787271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.20 22 <nil> <nil>}
	I0729 21:03:54.633196  787271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 21:03:54.741867  787271 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-913034
	
	I0729 21:03:54.741900  787271 main.go:141] libmachine: (pause-913034) Calling .GetMachineName
	I0729 21:03:54.742185  787271 buildroot.go:166] provisioning hostname "pause-913034"
	I0729 21:03:54.742217  787271 main.go:141] libmachine: (pause-913034) Calling .GetMachineName
	I0729 21:03:54.742473  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHHostname
	I0729 21:03:54.746029  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:03:54.746590  787271 main.go:141] libmachine: (pause-913034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:fa:69", ip: ""} in network mk-pause-913034: {Iface:virbr3 ExpiryTime:2024-07-29 22:03:04 +0000 UTC Type:0 Mac:52:54:00:55:fa:69 Iaid: IPaddr:192.168.61.20 Prefix:24 Hostname:pause-913034 Clientid:01:52:54:00:55:fa:69}
	I0729 21:03:54.746624  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined IP address 192.168.61.20 and MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:03:54.746849  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHPort
	I0729 21:03:54.747089  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHKeyPath
	I0729 21:03:54.747288  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHKeyPath
	I0729 21:03:54.747491  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHUsername
	I0729 21:03:54.747719  787271 main.go:141] libmachine: Using SSH client type: native
	I0729 21:03:54.747963  787271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.20 22 <nil> <nil>}
	I0729 21:03:54.747984  787271 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-913034 && echo "pause-913034" | sudo tee /etc/hostname
	I0729 21:03:54.874929  787271 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-913034
	
	I0729 21:03:54.874968  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHHostname
	I0729 21:03:54.878329  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:03:54.878776  787271 main.go:141] libmachine: (pause-913034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:fa:69", ip: ""} in network mk-pause-913034: {Iface:virbr3 ExpiryTime:2024-07-29 22:03:04 +0000 UTC Type:0 Mac:52:54:00:55:fa:69 Iaid: IPaddr:192.168.61.20 Prefix:24 Hostname:pause-913034 Clientid:01:52:54:00:55:fa:69}
	I0729 21:03:54.878822  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined IP address 192.168.61.20 and MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:03:54.878987  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHPort
	I0729 21:03:54.879212  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHKeyPath
	I0729 21:03:54.879417  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHKeyPath
	I0729 21:03:54.879598  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHUsername
	I0729 21:03:54.879812  787271 main.go:141] libmachine: Using SSH client type: native
	I0729 21:03:54.880060  787271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.20 22 <nil> <nil>}
	I0729 21:03:54.880084  787271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-913034' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-913034/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-913034' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 21:03:54.989119  787271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
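
The provisioning steps above are driven entirely over SSH: minikube connects to the guest with the generated machine key and runs small shell snippets such as the hostname and /etc/hosts update shown. As a rough, stand-alone illustration of that pattern only (not minikube's actual ssh_runner code), the sketch below runs one remote command with the third-party golang.org/x/crypto/ssh package, reusing the address, user and key path that appear in this log; the helper name runSSH is made up for the example.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runSSH dials the guest VM and runs a single shell command, returning its combined output.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runSSH("192.168.61.20:22", "docker",
            "/home/jenkins/minikube-integration/19344-733808/.minikube/machines/pause-913034/id_rsa",
            "hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(out)
    }
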
	I0729 21:03:54.989194  787271 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19344-733808/.minikube CaCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19344-733808/.minikube}
	I0729 21:03:54.989264  787271 buildroot.go:174] setting up certificates
	I0729 21:03:54.989281  787271 provision.go:84] configureAuth start
	I0729 21:03:54.989302  787271 main.go:141] libmachine: (pause-913034) Calling .GetMachineName
	I0729 21:03:54.989672  787271 main.go:141] libmachine: (pause-913034) Calling .GetIP
	I0729 21:03:54.992847  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:03:54.993174  787271 main.go:141] libmachine: (pause-913034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:fa:69", ip: ""} in network mk-pause-913034: {Iface:virbr3 ExpiryTime:2024-07-29 22:03:04 +0000 UTC Type:0 Mac:52:54:00:55:fa:69 Iaid: IPaddr:192.168.61.20 Prefix:24 Hostname:pause-913034 Clientid:01:52:54:00:55:fa:69}
	I0729 21:03:54.993214  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined IP address 192.168.61.20 and MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:03:54.993394  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHHostname
	I0729 21:03:54.996239  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:03:54.996644  787271 main.go:141] libmachine: (pause-913034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:fa:69", ip: ""} in network mk-pause-913034: {Iface:virbr3 ExpiryTime:2024-07-29 22:03:04 +0000 UTC Type:0 Mac:52:54:00:55:fa:69 Iaid: IPaddr:192.168.61.20 Prefix:24 Hostname:pause-913034 Clientid:01:52:54:00:55:fa:69}
	I0729 21:03:54.996672  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined IP address 192.168.61.20 and MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:03:54.996846  787271 provision.go:143] copyHostCerts
	I0729 21:03:54.996913  787271 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem, removing ...
	I0729 21:03:54.996924  787271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem
	I0729 21:03:54.996979  787271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/ca.pem (1078 bytes)
	I0729 21:03:54.997082  787271 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem, removing ...
	I0729 21:03:54.997090  787271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem
	I0729 21:03:54.997115  787271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/cert.pem (1123 bytes)
	I0729 21:03:54.997189  787271 exec_runner.go:144] found /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem, removing ...
	I0729 21:03:54.997199  787271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem
	I0729 21:03:54.997230  787271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19344-733808/.minikube/key.pem (1679 bytes)
	I0729 21:03:54.997324  787271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem org=jenkins.pause-913034 san=[127.0.0.1 192.168.61.20 localhost minikube pause-913034]
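
configureAuth above regenerates the machine server certificate with SANs for the loopback address, the VM IP and the host names. The sketch below shows the same idea with Go's standard crypto/x509: a certificate template carrying those DNS and IP SANs. It is self-signed for brevity, whereas minikube signs with its CA key, and none of this is minikube's own code.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.pause-913034"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the log: san=[127.0.0.1 192.168.61.20 localhost minikube pause-913034]
            DNSNames:    []string{"localhost", "minikube", "pause-913034"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.20")},
        }
        // Self-signed here for brevity; minikube signs the server cert with its CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
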
	I0729 21:03:55.395685  787271 provision.go:177] copyRemoteCerts
	I0729 21:03:55.395804  787271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 21:03:55.395841  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHHostname
	I0729 21:03:55.399030  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:03:55.399485  787271 main.go:141] libmachine: (pause-913034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:fa:69", ip: ""} in network mk-pause-913034: {Iface:virbr3 ExpiryTime:2024-07-29 22:03:04 +0000 UTC Type:0 Mac:52:54:00:55:fa:69 Iaid: IPaddr:192.168.61.20 Prefix:24 Hostname:pause-913034 Clientid:01:52:54:00:55:fa:69}
	I0729 21:03:55.399535  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined IP address 192.168.61.20 and MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:03:55.399739  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHPort
	I0729 21:03:55.400053  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHKeyPath
	I0729 21:03:55.400331  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHUsername
	I0729 21:03:55.400506  787271 sshutil.go:53] new ssh client: &{IP:192.168.61.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/pause-913034/id_rsa Username:docker}
	I0729 21:03:55.486732  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 21:03:55.520082  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0729 21:03:55.550339  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 21:03:55.580232  787271 provision.go:87] duration metric: took 590.933719ms to configureAuth
	I0729 21:03:55.580268  787271 buildroot.go:189] setting minikube options for container-runtime
	I0729 21:03:55.580537  787271 config.go:182] Loaded profile config "pause-913034": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 21:03:55.580662  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHHostname
	I0729 21:03:55.584014  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:03:55.584501  787271 main.go:141] libmachine: (pause-913034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:fa:69", ip: ""} in network mk-pause-913034: {Iface:virbr3 ExpiryTime:2024-07-29 22:03:04 +0000 UTC Type:0 Mac:52:54:00:55:fa:69 Iaid: IPaddr:192.168.61.20 Prefix:24 Hostname:pause-913034 Clientid:01:52:54:00:55:fa:69}
	I0729 21:03:55.584534  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined IP address 192.168.61.20 and MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:03:55.584794  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHPort
	I0729 21:03:55.585052  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHKeyPath
	I0729 21:03:55.585276  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHKeyPath
	I0729 21:03:55.585478  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHUsername
	I0729 21:03:55.585695  787271 main.go:141] libmachine: Using SSH client type: native
	I0729 21:03:55.585869  787271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.20 22 <nil> <nil>}
	I0729 21:03:55.585909  787271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0729 21:04:01.588258  787271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0729 21:04:01.588292  787271 machine.go:97] duration metric: took 6.960219273s to provisionDockerMachine
	I0729 21:04:01.588309  787271 start.go:293] postStartSetup for "pause-913034" (driver="kvm2")
	I0729 21:04:01.588324  787271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 21:04:01.588346  787271 main.go:141] libmachine: (pause-913034) Calling .DriverName
	I0729 21:04:01.588725  787271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 21:04:01.588762  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHHostname
	I0729 21:04:01.592010  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:04:01.592511  787271 main.go:141] libmachine: (pause-913034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:fa:69", ip: ""} in network mk-pause-913034: {Iface:virbr3 ExpiryTime:2024-07-29 22:03:04 +0000 UTC Type:0 Mac:52:54:00:55:fa:69 Iaid: IPaddr:192.168.61.20 Prefix:24 Hostname:pause-913034 Clientid:01:52:54:00:55:fa:69}
	I0729 21:04:01.592542  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined IP address 192.168.61.20 and MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:04:01.592794  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHPort
	I0729 21:04:01.592997  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHKeyPath
	I0729 21:04:01.593174  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHUsername
	I0729 21:04:01.593381  787271 sshutil.go:53] new ssh client: &{IP:192.168.61.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/pause-913034/id_rsa Username:docker}
	I0729 21:04:01.693952  787271 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 21:04:01.698379  787271 info.go:137] Remote host: Buildroot 2023.02.9
	I0729 21:04:01.698406  787271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/addons for local assets ...
	I0729 21:04:01.698479  787271 filesync.go:126] Scanning /home/jenkins/minikube-integration/19344-733808/.minikube/files for local assets ...
	I0729 21:04:01.698573  787271 filesync.go:149] local asset: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem -> 7409622.pem in /etc/ssl/certs
	I0729 21:04:01.698659  787271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 21:04:01.708196  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 21:04:01.732888  787271 start.go:296] duration metric: took 144.560955ms for postStartSetup
	I0729 21:04:01.732938  787271 fix.go:56] duration metric: took 7.131884409s for fixHost
	I0729 21:04:01.732966  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHHostname
	I0729 21:04:01.735681  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:04:01.736298  787271 main.go:141] libmachine: (pause-913034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:fa:69", ip: ""} in network mk-pause-913034: {Iface:virbr3 ExpiryTime:2024-07-29 22:03:04 +0000 UTC Type:0 Mac:52:54:00:55:fa:69 Iaid: IPaddr:192.168.61.20 Prefix:24 Hostname:pause-913034 Clientid:01:52:54:00:55:fa:69}
	I0729 21:04:01.736341  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined IP address 192.168.61.20 and MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:04:01.736522  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHPort
	I0729 21:04:01.736718  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHKeyPath
	I0729 21:04:01.736902  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHKeyPath
	I0729 21:04:01.737064  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHUsername
	I0729 21:04:01.737315  787271 main.go:141] libmachine: Using SSH client type: native
	I0729 21:04:01.737552  787271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82da80] 0x8307e0 <nil>  [] 0s} 192.168.61.20 22 <nil> <nil>}
	I0729 21:04:01.737575  787271 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0729 21:04:01.846521  787271 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722287041.838174927
	
	I0729 21:04:01.846552  787271 fix.go:216] guest clock: 1722287041.838174927
	I0729 21:04:01.846564  787271 fix.go:229] Guest: 2024-07-29 21:04:01.838174927 +0000 UTC Remote: 2024-07-29 21:04:01.732943995 +0000 UTC m=+10.019842909 (delta=105.230932ms)
	I0729 21:04:01.846611  787271 fix.go:200] guest clock delta is within tolerance: 105.230932ms
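
fix.go reads the guest clock over SSH with `date +%s.%N`, compares it with the host-side timestamp, and leaves it alone when the skew is small, here about 105 ms. A minimal sketch of that comparison, using the two values from the log above; nanosecond precision is lost in the float parse, which does not matter at this scale, and the tolerance check itself is omitted.

    package main

    import (
        "fmt"
        "log"
        "math"
        "strconv"
        "time"
    )

    // clockDelta parses `date +%s.%N` output from the guest and returns the
    // absolute skew against a host-side reference time.
    func clockDelta(guestDate string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestDate, 64)
        if err != nil {
            return 0, err
        }
        // float64 cannot carry full nanosecond precision at this magnitude; close enough here.
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return time.Duration(math.Abs(float64(host.Sub(guest)))), nil
    }

    func main() {
        // Both values are taken from the log lines above.
        host, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
            "2024-07-29 21:04:01.732943995 +0000 UTC")
        if err != nil {
            log.Fatal(err)
        }
        d, err := clockDelta("1722287041.838174927", host)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("guest clock delta:", d) // roughly 105ms, reported as within tolerance above
    }
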
	I0729 21:04:01.846617  787271 start.go:83] releasing machines lock for "pause-913034", held for 7.245616916s
	I0729 21:04:01.846638  787271 main.go:141] libmachine: (pause-913034) Calling .DriverName
	I0729 21:04:01.846924  787271 main.go:141] libmachine: (pause-913034) Calling .GetIP
	I0729 21:04:01.850162  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:04:01.850594  787271 main.go:141] libmachine: (pause-913034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:fa:69", ip: ""} in network mk-pause-913034: {Iface:virbr3 ExpiryTime:2024-07-29 22:03:04 +0000 UTC Type:0 Mac:52:54:00:55:fa:69 Iaid: IPaddr:192.168.61.20 Prefix:24 Hostname:pause-913034 Clientid:01:52:54:00:55:fa:69}
	I0729 21:04:01.850634  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined IP address 192.168.61.20 and MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:04:01.850769  787271 main.go:141] libmachine: (pause-913034) Calling .DriverName
	I0729 21:04:01.851372  787271 main.go:141] libmachine: (pause-913034) Calling .DriverName
	I0729 21:04:01.851589  787271 main.go:141] libmachine: (pause-913034) Calling .DriverName
	I0729 21:04:01.851682  787271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 21:04:01.851740  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHHostname
	I0729 21:04:01.852112  787271 ssh_runner.go:195] Run: cat /version.json
	I0729 21:04:01.852142  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHHostname
	I0729 21:04:01.855226  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:04:01.855656  787271 main.go:141] libmachine: (pause-913034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:fa:69", ip: ""} in network mk-pause-913034: {Iface:virbr3 ExpiryTime:2024-07-29 22:03:04 +0000 UTC Type:0 Mac:52:54:00:55:fa:69 Iaid: IPaddr:192.168.61.20 Prefix:24 Hostname:pause-913034 Clientid:01:52:54:00:55:fa:69}
	I0729 21:04:01.855758  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined IP address 192.168.61.20 and MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:04:01.856211  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHPort
	I0729 21:04:01.856219  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:04:01.856410  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHKeyPath
	I0729 21:04:01.856594  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHUsername
	I0729 21:04:01.856802  787271 main.go:141] libmachine: (pause-913034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:fa:69", ip: ""} in network mk-pause-913034: {Iface:virbr3 ExpiryTime:2024-07-29 22:03:04 +0000 UTC Type:0 Mac:52:54:00:55:fa:69 Iaid: IPaddr:192.168.61.20 Prefix:24 Hostname:pause-913034 Clientid:01:52:54:00:55:fa:69}
	I0729 21:04:01.856824  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined IP address 192.168.61.20 and MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:04:01.856805  787271 sshutil.go:53] new ssh client: &{IP:192.168.61.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/pause-913034/id_rsa Username:docker}
	I0729 21:04:01.857373  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHPort
	I0729 21:04:01.857576  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHKeyPath
	I0729 21:04:01.857769  787271 main.go:141] libmachine: (pause-913034) Calling .GetSSHUsername
	I0729 21:04:01.858008  787271 sshutil.go:53] new ssh client: &{IP:192.168.61.20 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/pause-913034/id_rsa Username:docker}
	I0729 21:04:01.971792  787271 ssh_runner.go:195] Run: systemctl --version
	I0729 21:04:01.979172  787271 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0729 21:04:02.135870  787271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0729 21:04:02.141675  787271 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0729 21:04:02.141776  787271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 21:04:02.152415  787271 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0729 21:04:02.152448  787271 start.go:495] detecting cgroup driver to use...
	I0729 21:04:02.152526  787271 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0729 21:04:02.178632  787271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0729 21:04:02.196901  787271 docker.go:216] disabling cri-docker service (if available) ...
	I0729 21:04:02.196966  787271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 21:04:02.214690  787271 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 21:04:02.236257  787271 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 21:04:02.395884  787271 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 21:04:02.560314  787271 docker.go:232] disabling docker service ...
	I0729 21:04:02.560410  787271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 21:04:02.577572  787271 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 21:04:02.591686  787271 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 21:04:02.718785  787271 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 21:04:02.857792  787271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 21:04:02.871199  787271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 21:04:02.889895  787271 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0729 21:04:02.889975  787271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 21:04:02.900105  787271 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0729 21:04:02.900168  787271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 21:04:02.913859  787271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 21:04:02.925033  787271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 21:04:02.936426  787271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 21:04:02.948127  787271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 21:04:02.958356  787271 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 21:04:02.969856  787271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0729 21:04:02.979708  787271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 21:04:02.989117  787271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 21:04:03.001744  787271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 21:04:03.197764  787271 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0729 21:04:03.527588  787271 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0729 21:04:03.527691  787271 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0729 21:04:03.535031  787271 start.go:563] Will wait 60s for crictl version
	I0729 21:04:03.535132  787271 ssh_runner.go:195] Run: which crictl
	I0729 21:04:03.541565  787271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 21:04:03.593743  787271 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0729 21:04:03.593838  787271 ssh_runner.go:195] Run: crio --version
	I0729 21:04:03.633090  787271 ssh_runner.go:195] Run: crio --version
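
After rewriting /etc/crio/crio.conf.d/02-crio.conf and restarting the service, the tool waits up to 60s for /var/run/crio/crio.sock to appear and for crictl to answer before continuing. A plain polling loop, sketched below, is enough to illustrate that "wait for the runtime socket" step; minikube's own retry helpers differ.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists (or the timeout expires), mirroring the
    // "Will wait 60s for socket path /var/run/crio/crio.sock" step in the log.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("crio socket is up")
    }
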
	I0729 21:04:03.667368  787271 out.go:177] * Preparing Kubernetes v1.30.3 on CRI-O 1.29.1 ...
	I0729 21:04:03.668585  787271 main.go:141] libmachine: (pause-913034) Calling .GetIP
	I0729 21:04:03.672310  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:04:03.672836  787271 main.go:141] libmachine: (pause-913034) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:55:fa:69", ip: ""} in network mk-pause-913034: {Iface:virbr3 ExpiryTime:2024-07-29 22:03:04 +0000 UTC Type:0 Mac:52:54:00:55:fa:69 Iaid: IPaddr:192.168.61.20 Prefix:24 Hostname:pause-913034 Clientid:01:52:54:00:55:fa:69}
	I0729 21:04:03.672879  787271 main.go:141] libmachine: (pause-913034) DBG | domain pause-913034 has defined IP address 192.168.61.20 and MAC address 52:54:00:55:fa:69 in network mk-pause-913034
	I0729 21:04:03.673123  787271 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0729 21:04:03.678821  787271 kubeadm.go:883] updating cluster {Name:pause-913034 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:pause-913034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.20 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false
olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 21:04:03.679017  787271 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 21:04:03.679094  787271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 21:04:03.739878  787271 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 21:04:03.739911  787271 crio.go:433] Images already preloaded, skipping extraction
	I0729 21:04:03.739980  787271 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 21:04:03.792350  787271 crio.go:514] all images are preloaded for cri-o runtime.
	I0729 21:04:03.792382  787271 cache_images.go:84] Images are preloaded, skipping loading
	I0729 21:04:03.792392  787271 kubeadm.go:934] updating node { 192.168.61.20 8443 v1.30.3 crio true true} ...
	I0729 21:04:03.792521  787271 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-913034 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.20
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:pause-913034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 21:04:03.792631  787271 ssh_runner.go:195] Run: crio config
	I0729 21:04:03.861961  787271 cni.go:84] Creating CNI manager for ""
	I0729 21:04:03.861992  787271 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 21:04:03.862004  787271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 21:04:03.862034  787271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.20 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-913034 NodeName:pause-913034 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.20"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.20 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubern
etes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 21:04:03.862230  787271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.20
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-913034"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.20
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.20"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
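
The kubeadm bundle printed above is four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration), and its `cgroupDriver: cgroupfs` matches the `cgroup_manager = "cgroupfs"` written into the CRI-O config earlier. As an illustrative check only, the sketch below walks such a multi-document file with the third-party gopkg.in/yaml.v3 decoder and prints the kubelet cgroup driver; this is not how minikube renders or validates the config.

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path used in the log above
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            if doc["kind"] == "KubeletConfiguration" {
                // Expect "cgroupfs" so the kubelet agrees with CRI-O's cgroup_manager.
                fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"])
            }
        }
    }
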
	
	I0729 21:04:03.862307  787271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 21:04:03.873627  787271 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 21:04:03.873716  787271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 21:04:03.886118  787271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0729 21:04:03.907809  787271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 21:04:03.928116  787271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0729 21:04:03.949509  787271 ssh_runner.go:195] Run: grep 192.168.61.20	control-plane.minikube.internal$ /etc/hosts
	I0729 21:04:03.954166  787271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 21:04:04.148750  787271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 21:04:04.165154  787271 certs.go:68] Setting up /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/pause-913034 for IP: 192.168.61.20
	I0729 21:04:04.165182  787271 certs.go:194] generating shared ca certs ...
	I0729 21:04:04.165203  787271 certs.go:226] acquiring lock for ca certs: {Name:mk1ee0b90d042110a8e3a69ee9f87466f00fd9d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 21:04:04.165388  787271 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key
	I0729 21:04:04.165482  787271 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key
	I0729 21:04:04.165496  787271 certs.go:256] generating profile certs ...
	I0729 21:04:04.165628  787271 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/pause-913034/client.key
	I0729 21:04:04.165710  787271 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/pause-913034/apiserver.key.8621f971
	I0729 21:04:04.165766  787271 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/pause-913034/proxy-client.key
	I0729 21:04:04.165910  787271 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem (1338 bytes)
	W0729 21:04:04.165953  787271 certs.go:480] ignoring /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962_empty.pem, impossibly tiny 0 bytes
	I0729 21:04:04.165968  787271 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 21:04:04.166014  787271 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/ca.pem (1078 bytes)
	I0729 21:04:04.166067  787271 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/cert.pem (1123 bytes)
	I0729 21:04:04.166112  787271 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/certs/key.pem (1679 bytes)
	I0729 21:04:04.166172  787271 certs.go:484] found cert: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem (1708 bytes)
	I0729 21:04:04.167028  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 21:04:04.192635  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 21:04:04.219821  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 21:04:04.246959  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0729 21:04:04.277274  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/pause-913034/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 21:04:04.306457  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/pause-913034/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 21:04:04.334253  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/pause-913034/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 21:04:04.362565  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/pause-913034/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 21:04:04.392255  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/ssl/certs/7409622.pem --> /usr/share/ca-certificates/7409622.pem (1708 bytes)
	I0729 21:04:04.444598  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 21:04:04.478878  787271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19344-733808/.minikube/certs/740962.pem --> /usr/share/ca-certificates/740962.pem (1338 bytes)
	I0729 21:04:04.566568  787271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 21:04:04.789284  787271 ssh_runner.go:195] Run: openssl version
	I0729 21:04:04.809481  787271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7409622.pem && ln -fs /usr/share/ca-certificates/7409622.pem /etc/ssl/certs/7409622.pem"
	I0729 21:04:04.864701  787271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7409622.pem
	I0729 21:04:04.882527  787271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 20:05 /usr/share/ca-certificates/7409622.pem
	I0729 21:04:04.882680  787271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7409622.pem
	I0729 21:04:04.907867  787271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7409622.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 21:04:04.945186  787271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 21:04:05.021868  787271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 21:04:05.049929  787271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 19:25 /usr/share/ca-certificates/minikubeCA.pem
	I0729 21:04:05.050038  787271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 21:04:05.087793  787271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 21:04:05.161011  787271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/740962.pem && ln -fs /usr/share/ca-certificates/740962.pem /etc/ssl/certs/740962.pem"
	I0729 21:04:05.221950  787271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/740962.pem
	I0729 21:04:05.235512  787271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 20:05 /usr/share/ca-certificates/740962.pem
	I0729 21:04:05.235600  787271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/740962.pem
	I0729 21:04:05.256720  787271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/740962.pem /etc/ssl/certs/51391683.0"
	I0729 21:04:05.291955  787271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 21:04:05.302857  787271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 21:04:05.313496  787271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 21:04:05.325528  787271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 21:04:05.335586  787271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 21:04:05.347034  787271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 21:04:05.357185  787271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
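
Each `openssl x509 ... -checkend 86400` run above asks whether a certificate expires within the next 24 hours, which is what decides whether the cluster certificates get regenerated. The Go equivalent, parsing the PEM and comparing NotAfter against now+window, looks roughly like this; it is an illustrative sketch, not minikube's own implementation.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // before now+window, i.e. the Go counterpart of `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
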
	I0729 21:04:05.364826  787271 kubeadm.go:392] StartCluster: {Name:pause-913034 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Cl
usterName:pause-913034 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.20 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 21:04:05.365014  787271 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0729 21:04:05.365085  787271 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 21:04:05.427208  787271 cri.go:89] found id: "5832856fcd232fab3a660105d82da8acd8f636b0b0060a550cc26b80c9f0aad0"
	I0729 21:04:05.427240  787271 cri.go:89] found id: "7c973d38feceff6cfe2a132b3fcabd20e359ae0051c4ee71d4332de088685c09"
	I0729 21:04:05.427247  787271 cri.go:89] found id: "303544245d2120299981d1b0508021cfaec29c22d849dae504c2f9faa8d12c6d"
	I0729 21:04:05.427253  787271 cri.go:89] found id: "1eb0c4e3c63f2fea6c7396214fa18a62189dc75790a83042a636d74d989b5e7f"
	I0729 21:04:05.427258  787271 cri.go:89] found id: "768d8e818b193afe9ad99106fd466249bf66720e49875fca9e86bf0753910ad3"
	I0729 21:04:05.427265  787271 cri.go:89] found id: "7caae98a26079816357a9c885769123c5a6713bbf19187692b3baad5f5096c33"
	I0729 21:04:05.427270  787271 cri.go:89] found id: "b3e07e352746928ab82009d79e3ba223c84ee5b1d3ce51c52abf0825bec80ed7"
	I0729 21:04:05.427276  787271 cri.go:89] found id: "374fc228567b1f4c6da709530c369b05fc39d6a163cb7828b53b57e2534d8381"
	I0729 21:04:05.427280  787271 cri.go:89] found id: "7e70d9004143be715c612dd685880973b487afe564321f47752828c0fd7fabbc"
	I0729 21:04:05.427290  787271 cri.go:89] found id: "3ea61278d913cb9e9c9427b4fc4efc59069812cea6f14a99c70b7856f60ee6b5"
	I0729 21:04:05.427295  787271 cri.go:89] found id: ""
	I0729 21:04:05.427357  787271 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
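Editor's note: the stderr above ends at the `sudo runc list -f json` probe that runs right after cri.go has listed the kube-system container IDs. For anyone re-running this failure by hand, the sketch below is an editorial addition (not part of the captured output) that repeats that container listing over `minikube ssh`; it assumes the pause-913034 profile is still running and that the binary sits at out/minikube-linux-amd64, as in this run.

	// Minimal reproduction sketch in Go. Assumptions: the pause-913034 profile is
	// still up and the minikube binary path matches the one used by this test run.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Same crictl invocation that cri.go logs above, run over `minikube ssh`.
		cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "pause-913034", "--",
			"sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system")
		out, err := cmd.CombinedOutput()
		if err != nil {
			log.Fatalf("crictl probe failed: %v\n%s", err, out)
		}
		ids := strings.Fields(string(out))
		fmt.Printf("found %d kube-system container IDs\n", len(ids))
		for _, id := range ids {
			fmt.Println(id)
		}
	}

If the runtime state is unchanged, the IDs printed should match the "found id:" entries in the stderr above.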
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-913034 -n pause-913034
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-913034 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-913034 logs -n 25: (1.32918266s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-148160                | NoKubernetes-148160       | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:01 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-832067 ssh cat     | force-systemd-flag-832067 | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:01 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-832067          | force-systemd-flag-832067 | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:01 UTC |
	| start   | -p cert-expiration-461577             | cert-expiration-461577    | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:02 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-148160 sudo           | NoKubernetes-148160       | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-148160                | NoKubernetes-148160       | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:01 UTC |
	| start   | -p NoKubernetes-148160                | NoKubernetes-148160       | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:02 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-160077             | running-upgrade-160077    | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:01 UTC |
	| start   | -p cert-options-768831                | cert-options-768831       | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:03 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-148160 sudo           | NoKubernetes-148160       | jenkins | v1.33.1 | 29 Jul 24 21:02 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-148160                | NoKubernetes-148160       | jenkins | v1.33.1 | 29 Jul 24 21:02 UTC | 29 Jul 24 21:02 UTC |
	| start   | -p pause-913034 --memory=2048         | pause-913034              | jenkins | v1.33.1 | 29 Jul 24 21:02 UTC | 29 Jul 24 21:03 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-768831 ssh               | cert-options-768831       | jenkins | v1.33.1 | 29 Jul 24 21:03 UTC | 29 Jul 24 21:03 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-768831 -- sudo        | cert-options-768831       | jenkins | v1.33.1 | 29 Jul 24 21:03 UTC | 29 Jul 24 21:03 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-768831                | cert-options-768831       | jenkins | v1.33.1 | 29 Jul 24 21:03 UTC | 29 Jul 24 21:03 UTC |
	| start   | -p stopped-upgrade-252364             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 21:03 UTC | 29 Jul 24 21:04 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-171355          | kubernetes-upgrade-171355 | jenkins | v1.33.1 | 29 Jul 24 21:03 UTC | 29 Jul 24 21:03 UTC |
	| start   | -p kubernetes-upgrade-171355          | kubernetes-upgrade-171355 | jenkins | v1.33.1 | 29 Jul 24 21:03 UTC | 29 Jul 24 21:04 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-913034                       | pause-913034              | jenkins | v1.33.1 | 29 Jul 24 21:03 UTC | 29 Jul 24 21:04 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-252364 stop           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 21:04 UTC | 29 Jul 24 21:04 UTC |
	| start   | -p stopped-upgrade-252364             | stopped-upgrade-252364    | jenkins | v1.33.1 | 29 Jul 24 21:04 UTC | 29 Jul 24 21:04 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-171355          | kubernetes-upgrade-171355 | jenkins | v1.33.1 | 29 Jul 24 21:04 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-171355          | kubernetes-upgrade-171355 | jenkins | v1.33.1 | 29 Jul 24 21:04 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-252364             | stopped-upgrade-252364    | jenkins | v1.33.1 | 29 Jul 24 21:04 UTC | 29 Jul 24 21:04 UTC |
	| start   | -p auto-404553 --memory=3072          | auto-404553               | jenkins | v1.33.1 | 29 Jul 24 21:04 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 21:04:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 21:04:51.203797  788012 out.go:291] Setting OutFile to fd 1 ...
	I0729 21:04:51.204084  788012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 21:04:51.204094  788012 out.go:304] Setting ErrFile to fd 2...
	I0729 21:04:51.204098  788012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 21:04:51.204320  788012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 21:04:51.205002  788012 out.go:298] Setting JSON to false
	I0729 21:04:51.206027  788012 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":17238,"bootTime":1722269853,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 21:04:51.206084  788012 start.go:139] virtualization: kvm guest
	I0729 21:04:51.208448  788012 out.go:177] * [auto-404553] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 21:04:51.210083  788012 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 21:04:51.210081  788012 notify.go:220] Checking for updates...
	I0729 21:04:51.212513  788012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 21:04:51.213779  788012 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 21:04:51.215007  788012 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 21:04:51.216239  788012 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 21:04:51.217433  788012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 21:04:51.219042  788012 config.go:182] Loaded profile config "cert-expiration-461577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 21:04:51.219141  788012 config.go:182] Loaded profile config "kubernetes-upgrade-171355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 21:04:51.219256  788012 config.go:182] Loaded profile config "pause-913034": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 21:04:51.219337  788012 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 21:04:51.259502  788012 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 21:04:51.260726  788012 start.go:297] selected driver: kvm2
	I0729 21:04:51.260741  788012 start.go:901] validating driver "kvm2" against <nil>
	I0729 21:04:51.260754  788012 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 21:04:51.261618  788012 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 21:04:51.261715  788012 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 21:04:51.277511  788012 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 21:04:51.277572  788012 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 21:04:51.277792  788012 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 21:04:51.277818  788012 cni.go:84] Creating CNI manager for ""
	I0729 21:04:51.277826  788012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 21:04:51.277833  788012 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 21:04:51.277886  788012 start.go:340] cluster config:
	{Name:auto-404553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-404553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 21:04:51.277979  788012 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 21:04:51.279685  788012 out.go:177] * Starting "auto-404553" primary control-plane node in "auto-404553" cluster
	I0729 21:04:51.281048  788012 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 21:04:51.281091  788012 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 21:04:51.281104  788012 cache.go:56] Caching tarball of preloaded images
	I0729 21:04:51.281193  788012 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 21:04:51.281204  788012 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 21:04:51.281308  788012 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/auto-404553/config.json ...
	I0729 21:04:51.281333  788012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/auto-404553/config.json: {Name:mk186cdbda2945eb4dae15002f84cc031a1886d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 21:04:51.281494  788012 start.go:360] acquireMachinesLock for auto-404553: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 21:04:51.281536  788012 start.go:364] duration metric: took 25.988µs to acquireMachinesLock for "auto-404553"
	I0729 21:04:51.281557  788012 start.go:93] Provisioning new machine with config: &{Name:auto-404553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.3 ClusterName:auto-404553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 21:04:51.281667  788012 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 21:04:49.062886  787271 addons.go:510] duration metric: took 2.925936ms for enable addons: enabled=[]
	I0729 21:04:49.062948  787271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 21:04:49.237072  787271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 21:04:49.253138  787271 node_ready.go:35] waiting up to 6m0s for node "pause-913034" to be "Ready" ...
	I0729 21:04:49.256432  787271 node_ready.go:49] node "pause-913034" has status "Ready":"True"
	I0729 21:04:49.256465  787271 node_ready.go:38] duration metric: took 3.285682ms for node "pause-913034" to be "Ready" ...
	I0729 21:04:49.256478  787271 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 21:04:49.262371  787271 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-djrln" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:49.268711  787271 pod_ready.go:92] pod "coredns-7db6d8ff4d-djrln" in "kube-system" namespace has status "Ready":"True"
	I0729 21:04:49.268738  787271 pod_ready.go:81] duration metric: took 6.338278ms for pod "coredns-7db6d8ff4d-djrln" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:49.268749  787271 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:49.430226  787271 pod_ready.go:92] pod "etcd-pause-913034" in "kube-system" namespace has status "Ready":"True"
	I0729 21:04:49.430254  787271 pod_ready.go:81] duration metric: took 161.49732ms for pod "etcd-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:49.430270  787271 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:49.828879  787271 pod_ready.go:92] pod "kube-apiserver-pause-913034" in "kube-system" namespace has status "Ready":"True"
	I0729 21:04:49.828909  787271 pod_ready.go:81] duration metric: took 398.631181ms for pod "kube-apiserver-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:49.828923  787271 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:50.228991  787271 pod_ready.go:92] pod "kube-controller-manager-pause-913034" in "kube-system" namespace has status "Ready":"True"
	I0729 21:04:50.229017  787271 pod_ready.go:81] duration metric: took 400.084854ms for pod "kube-controller-manager-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:50.229031  787271 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-45zxr" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:50.629728  787271 pod_ready.go:92] pod "kube-proxy-45zxr" in "kube-system" namespace has status "Ready":"True"
	I0729 21:04:50.629751  787271 pod_ready.go:81] duration metric: took 400.713333ms for pod "kube-proxy-45zxr" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:50.629761  787271 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:51.028923  787271 pod_ready.go:92] pod "kube-scheduler-pause-913034" in "kube-system" namespace has status "Ready":"True"
	I0729 21:04:51.028950  787271 pod_ready.go:81] duration metric: took 399.182321ms for pod "kube-scheduler-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:51.028960  787271 pod_ready.go:38] duration metric: took 1.772467886s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 21:04:51.028977  787271 api_server.go:52] waiting for apiserver process to appear ...
	I0729 21:04:51.029028  787271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 21:04:51.046387  787271 api_server.go:72] duration metric: took 1.9864563s to wait for apiserver process to appear ...
	I0729 21:04:51.046417  787271 api_server.go:88] waiting for apiserver healthz status ...
	I0729 21:04:51.046441  787271 api_server.go:253] Checking apiserver healthz at https://192.168.61.20:8443/healthz ...
	I0729 21:04:51.051410  787271 api_server.go:279] https://192.168.61.20:8443/healthz returned 200:
	ok
	I0729 21:04:51.052603  787271 api_server.go:141] control plane version: v1.30.3
	I0729 21:04:51.052625  787271 api_server.go:131] duration metric: took 6.200125ms to wait for apiserver health ...
	I0729 21:04:51.052636  787271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 21:04:51.232867  787271 system_pods.go:59] 6 kube-system pods found
	I0729 21:04:51.232908  787271 system_pods.go:61] "coredns-7db6d8ff4d-djrln" [5526db23-d0f1-48ca-bd4e-d87981b47b51] Running
	I0729 21:04:51.232915  787271 system_pods.go:61] "etcd-pause-913034" [81af8f64-5999-41ab-8f6e-539e0db4f628] Running
	I0729 21:04:51.232919  787271 system_pods.go:61] "kube-apiserver-pause-913034" [3c4dd0a3-99f6-4df4-9703-10f6bce2f514] Running
	I0729 21:04:51.232924  787271 system_pods.go:61] "kube-controller-manager-pause-913034" [5a2f6655-600d-4e80-8339-c8d17e241121] Running
	I0729 21:04:51.232929  787271 system_pods.go:61] "kube-proxy-45zxr" [62f09954-bceb-4a05-a703-00b80c49e9bc] Running
	I0729 21:04:51.232967  787271 system_pods.go:61] "kube-scheduler-pause-913034" [7772c8f4-1a50-4678-a7c0-64c4434e56c0] Running
	I0729 21:04:51.232977  787271 system_pods.go:74] duration metric: took 180.334067ms to wait for pod list to return data ...
	I0729 21:04:51.232988  787271 default_sa.go:34] waiting for default service account to be created ...
	I0729 21:04:51.428544  787271 default_sa.go:45] found service account: "default"
	I0729 21:04:51.428568  787271 default_sa.go:55] duration metric: took 195.573469ms for default service account to be created ...
	I0729 21:04:51.428576  787271 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 21:04:51.631489  787271 system_pods.go:86] 6 kube-system pods found
	I0729 21:04:51.631520  787271 system_pods.go:89] "coredns-7db6d8ff4d-djrln" [5526db23-d0f1-48ca-bd4e-d87981b47b51] Running
	I0729 21:04:51.631526  787271 system_pods.go:89] "etcd-pause-913034" [81af8f64-5999-41ab-8f6e-539e0db4f628] Running
	I0729 21:04:51.631531  787271 system_pods.go:89] "kube-apiserver-pause-913034" [3c4dd0a3-99f6-4df4-9703-10f6bce2f514] Running
	I0729 21:04:51.631535  787271 system_pods.go:89] "kube-controller-manager-pause-913034" [5a2f6655-600d-4e80-8339-c8d17e241121] Running
	I0729 21:04:51.631539  787271 system_pods.go:89] "kube-proxy-45zxr" [62f09954-bceb-4a05-a703-00b80c49e9bc] Running
	I0729 21:04:51.631542  787271 system_pods.go:89] "kube-scheduler-pause-913034" [7772c8f4-1a50-4678-a7c0-64c4434e56c0] Running
	I0729 21:04:51.631551  787271 system_pods.go:126] duration metric: took 202.967399ms to wait for k8s-apps to be running ...
	I0729 21:04:51.631558  787271 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 21:04:51.631603  787271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 21:04:51.646254  787271 system_svc.go:56] duration metric: took 14.685539ms WaitForService to wait for kubelet
	I0729 21:04:51.646291  787271 kubeadm.go:582] duration metric: took 2.586362971s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 21:04:51.646324  787271 node_conditions.go:102] verifying NodePressure condition ...
	I0729 21:04:51.829383  787271 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 21:04:51.829412  787271 node_conditions.go:123] node cpu capacity is 2
	I0729 21:04:51.829427  787271 node_conditions.go:105] duration metric: took 183.095257ms to run NodePressure ...
	I0729 21:04:51.829443  787271 start.go:241] waiting for startup goroutines ...
	I0729 21:04:51.829456  787271 start.go:246] waiting for cluster config update ...
	I0729 21:04:51.829468  787271 start.go:255] writing updated cluster config ...
	I0729 21:04:51.829821  787271 ssh_runner.go:195] Run: rm -f paused
	I0729 21:04:51.885317  787271 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 21:04:51.887679  787271 out.go:177] * Done! kubectl is now configured to use "pause-913034" cluster and "default" namespace by default
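	Editor's note: the api_server.go lines above poll https://192.168.61.20:8443/healthz until it returns 200 "ok" before the pod and service-account checks proceed. The snippet below is a minimal, illustration-only sketch of that single probe (it is not minikube code); the endpoint is taken from the log, and skipping TLS verification is a shortcut for brevity, whereas minikube's own client authenticates with the cluster credentials from the kubeconfig.
	
	// Illustration-only healthz probe. Assumption: apiserver endpoint
	// 192.168.61.20:8443 as logged above; TLS verification is skipped here
	// purely to keep the sketch self-contained.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.61.20:8443/healthz")
		if err != nil {
			log.Fatalf("healthz probe failed: %v", err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
	}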
	
	
	==> CRI-O <==
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.553111731Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=81366ac2-4779-4855-860c-20840d1d7291 name=/runtime.v1.RuntimeService/Version
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.554539582Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7dd4780a-bded-428c-a581-55c01389b88d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.554970885Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722287092554945062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7dd4780a-bded-428c-a581-55c01389b88d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.555636550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e028fc72-726c-4f58-8684-d6aa3c4a4d4a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.555711533Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e028fc72-726c-4f58-8684-d6aa3c4a4d4a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.555963911Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995939ce45b8807f3b903e65d133adca9cd15b9b1630bf9651c77791c38eee6f,PodSandboxId:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722287074041725194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,},Annotations:map[string]string{io.kubernetes.container.hash: f4fbace3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c4b3e419cc67d30db41ed67e8e145a6250dc87c8a2a491fc4cce2ceb49625f,PodSandboxId:45fccc44ee359022b73fc7974505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722287069030923831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,},Annotations:map[string]string{io.kubernetes.container.hash: 18340588,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7631b8b65edf437a665d09a48d9071a082d5303ec1b370d0019269c804d603ec,PodSandboxId:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722287069057877087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69a40812190387eb204df88a23bb56d11263725a8341b5492a1c5693482f4c05,PodSandboxId:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722287069045703756,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722982b05e1b0032900a080ea07ff83e51390f67ca696130409062239f21880a,PodSandboxId:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722287069015416396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,},Annotations:map[string]string{io.kubernetes.container.hash: bce4469f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:643c95d4748dbf727a8d2b4772b800ef2f12b5c90d7cd3c733db684715f6a4eb,PodSandboxId:f1e53f7f6da2c20451b0b74b0d60bb47338cd7a5ecb4027db5d06d51c5e140ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287045532930527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,},Annotations:map[string]string{io.kubernetes.container.hash: bc79eed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbabc0f984e92717c5c092e306d03f330e6116ee14a528087d770d2eed1717de,PodSandboxId:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722287045087615947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,},Annotations:map[string]string{io.kubernetes.container.hash: f4fbac
e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5832856fcd232fab3a660105d82da8acd8f636b0b0060a550cc26b80c9f0aad0,PodSandboxId:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722287044991106627,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,},Annotations:map[string]string{io.kubernetes.container.hash: bce4469f,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c973d38feceff6cfe2a132b3fcabd20e359ae0051c4ee71d4332de088685c09,PodSandboxId:45fccc44ee359022b73fc7974505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722287044980948514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,},Annotations:map[string]string{io.kubernetes.container.hash: 18340588,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303544245d2120299981d1b0508021cfaec29c22d849dae504c2f9faa8d12c6d,PodSandboxId:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722287044879919370,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb0c4e3c63f2fea6c7396214fa18a62189dc75790a83042a636d74d989b5e7f,PodSandboxId:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722287044746670279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768d8e818b193afe9ad99106fd466249bf66720e49875fca9e86bf0753910ad3,PodSandboxId:ee962aa1517b668148481f7e46cd61d401dd925a73c6e51cc4f22388981bf4cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722287026247468987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,},Annotations:map[string]string{io.kubernetes.container.hash: bc79eed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e028fc72-726c-4f58-8684-d6aa3c4a4d4a name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.585902885Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f4e1dc5-a6cf-45f8-a368-a58c628cc8a2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.586357074Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f1e53f7f6da2c20451b0b74b0d60bb47338cd7a5ecb4027db5d06d51c5e140ab,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-djrln,Uid:5526db23-d0f1-48ca-bd4e-d87981b47b51,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722287044582946413,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T21:03:44.871691165Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-913034,Uid:bb9d0d0929c1cc5faa7e9aaed304676a,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1722287044573771970,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.20:8443,kubernetes.io/config.hash: bb9d0d0929c1cc5faa7e9aaed304676a,kubernetes.io/config.seen: 2024-07-29T21:03:31.210322540Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-913034,Uid:cb093dff2f8f2763c3e735334f097b2f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722287044562990727,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cb093dff2f8f2763c3e735334f097b2f,kubernetes.io/config.seen: 2024-07-29T21:03:31.210324921Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&PodSandboxMetadata{Name:kube-proxy-45zxr,Uid:62f09954-bceb-4a05-a703-00b80c49e9bc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722287044557740824,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T21:03:44.768047266Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:45fccc44ee359022b73fc797
4505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&PodSandboxMetadata{Name:etcd-pause-913034,Uid:8a4ed5a051ff710972d85f61a467cfef,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722287044549484475,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.20:2379,kubernetes.io/config.hash: 8a4ed5a051ff710972d85f61a467cfef,kubernetes.io/config.seen: 2024-07-29T21:03:31.210318141Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-913034,Uid:a3d7fd520617d6aa29f585dbbd93fd9e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722287044497987962,Labels:map[string]strin
g{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a3d7fd520617d6aa29f585dbbd93fd9e,kubernetes.io/config.seen: 2024-07-29T21:03:31.210323991Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=8f4e1dc5-a6cf-45f8-a368-a58c628cc8a2 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.587356573Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac9fe70d-74b9-4ead-bcc4-7594af15fb21 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.587466232Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac9fe70d-74b9-4ead-bcc4-7594af15fb21 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.587760638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995939ce45b8807f3b903e65d133adca9cd15b9b1630bf9651c77791c38eee6f,PodSandboxId:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722287074041725194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,},Annotations:map[string]string{io.kubernetes.container.hash: f4fbace3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c4b3e419cc67d30db41ed67e8e145a6250dc87c8a2a491fc4cce2ceb49625f,PodSandboxId:45fccc44ee359022b73fc7974505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722287069030923831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,},Annotations:map[string]string{io.kubernetes.container.hash: 18340588,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7631b8b65edf437a665d09a48d9071a082d5303ec1b370d0019269c804d603ec,PodSandboxId:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722287069057877087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69a40812190387eb204df88a23bb56d11263725a8341b5492a1c5693482f4c05,PodSandboxId:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722287069045703756,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722982b05e1b0032900a080ea07ff83e51390f67ca696130409062239f21880a,PodSandboxId:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722287069015416396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,},Annotations:map[string]string{io.kubernetes.container.hash: bce4469f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:643c95d4748dbf727a8d2b4772b800ef2f12b5c90d7cd3c733db684715f6a4eb,PodSandboxId:f1e53f7f6da2c20451b0b74b0d60bb47338cd7a5ecb4027db5d06d51c5e140ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287045532930527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,},Annotations:map[string]string{io.kubernetes.container.hash: bc79eed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac9fe70d-74b9-4ead-bcc4-7594af15fb21 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.601967739Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d47e70ea-0f06-4534-81c6-bff875fa8dda name=/runtime.v1.RuntimeService/Version
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.602086902Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d47e70ea-0f06-4534-81c6-bff875fa8dda name=/runtime.v1.RuntimeService/Version
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.603078510Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=308d25cd-baa2-4727-9587-aa2a2e065a27 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.603571585Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722287092603542799,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=308d25cd-baa2-4727-9587-aa2a2e065a27 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.604045938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d24001eb-c6fc-4d77-a1b1-75e206e80938 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.604124916Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d24001eb-c6fc-4d77-a1b1-75e206e80938 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.604880990Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995939ce45b8807f3b903e65d133adca9cd15b9b1630bf9651c77791c38eee6f,PodSandboxId:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722287074041725194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,},Annotations:map[string]string{io.kubernetes.container.hash: f4fbace3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c4b3e419cc67d30db41ed67e8e145a6250dc87c8a2a491fc4cce2ceb49625f,PodSandboxId:45fccc44ee359022b73fc7974505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722287069030923831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,},Annotations:map[string]string{io.kubernetes.container.hash: 18340588,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7631b8b65edf437a665d09a48d9071a082d5303ec1b370d0019269c804d603ec,PodSandboxId:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722287069057877087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69a40812190387eb204df88a23bb56d11263725a8341b5492a1c5693482f4c05,PodSandboxId:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722287069045703756,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722982b05e1b0032900a080ea07ff83e51390f67ca696130409062239f21880a,PodSandboxId:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722287069015416396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,},Annotations:map[string]string{io.kubernetes.container.hash: bce4469f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:643c95d4748dbf727a8d2b4772b800ef2f12b5c90d7cd3c733db684715f6a4eb,PodSandboxId:f1e53f7f6da2c20451b0b74b0d60bb47338cd7a5ecb4027db5d06d51c5e140ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287045532930527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,},Annotations:map[string]string{io.kubernetes.container.hash: bc79eed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbabc0f984e92717c5c092e306d03f330e6116ee14a528087d770d2eed1717de,PodSandboxId:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722287045087615947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,},Annotations:map[string]string{io.kubernetes.container.hash: f4fbac
e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5832856fcd232fab3a660105d82da8acd8f636b0b0060a550cc26b80c9f0aad0,PodSandboxId:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722287044991106627,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,},Annotations:map[string]string{io.kubernetes.container.hash: bce4469f,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c973d38feceff6cfe2a132b3fcabd20e359ae0051c4ee71d4332de088685c09,PodSandboxId:45fccc44ee359022b73fc7974505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722287044980948514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,},Annotations:map[string]string{io.kubernetes.container.hash: 18340588,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303544245d2120299981d1b0508021cfaec29c22d849dae504c2f9faa8d12c6d,PodSandboxId:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722287044879919370,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb0c4e3c63f2fea6c7396214fa18a62189dc75790a83042a636d74d989b5e7f,PodSandboxId:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722287044746670279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768d8e818b193afe9ad99106fd466249bf66720e49875fca9e86bf0753910ad3,PodSandboxId:ee962aa1517b668148481f7e46cd61d401dd925a73c6e51cc4f22388981bf4cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722287026247468987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,},Annotations:map[string]string{io.kubernetes.container.hash: bc79eed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d24001eb-c6fc-4d77-a1b1-75e206e80938 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.647910439Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce1f328c-a2fa-42f5-ad83-e65d5e02cef2 name=/runtime.v1.RuntimeService/Version
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.647991581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce1f328c-a2fa-42f5-ad83-e65d5e02cef2 name=/runtime.v1.RuntimeService/Version
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.649259281Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b836406-78cf-45d6-96a8-d19205be5a78 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.649638536Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722287092649615703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b836406-78cf-45d6-96a8-d19205be5a78 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.650110634Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a5b89b57-f99f-4ca2-89af-be87843c6191 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.650162089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a5b89b57-f99f-4ca2-89af-be87843c6191 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:52 pause-913034 crio[2437]: time="2024-07-29 21:04:52.650476471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995939ce45b8807f3b903e65d133adca9cd15b9b1630bf9651c77791c38eee6f,PodSandboxId:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722287074041725194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,},Annotations:map[string]string{io.kubernetes.container.hash: f4fbace3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c4b3e419cc67d30db41ed67e8e145a6250dc87c8a2a491fc4cce2ceb49625f,PodSandboxId:45fccc44ee359022b73fc7974505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722287069030923831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,},Annotations:map[string]string{io.kubernetes.container.hash: 18340588,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7631b8b65edf437a665d09a48d9071a082d5303ec1b370d0019269c804d603ec,PodSandboxId:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722287069057877087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69a40812190387eb204df88a23bb56d11263725a8341b5492a1c5693482f4c05,PodSandboxId:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722287069045703756,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722982b05e1b0032900a080ea07ff83e51390f67ca696130409062239f21880a,PodSandboxId:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722287069015416396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,},Annotations:map[string]string{io.kubernetes.container.hash: bce4469f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:643c95d4748dbf727a8d2b4772b800ef2f12b5c90d7cd3c733db684715f6a4eb,PodSandboxId:f1e53f7f6da2c20451b0b74b0d60bb47338cd7a5ecb4027db5d06d51c5e140ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287045532930527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,},Annotations:map[string]string{io.kubernetes.container.hash: bc79eed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbabc0f984e92717c5c092e306d03f330e6116ee14a528087d770d2eed1717de,PodSandboxId:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722287045087615947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,},Annotations:map[string]string{io.kubernetes.container.hash: f4fbac
e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5832856fcd232fab3a660105d82da8acd8f636b0b0060a550cc26b80c9f0aad0,PodSandboxId:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722287044991106627,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,},Annotations:map[string]string{io.kubernetes.container.hash: bce4469f,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c973d38feceff6cfe2a132b3fcabd20e359ae0051c4ee71d4332de088685c09,PodSandboxId:45fccc44ee359022b73fc7974505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722287044980948514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,},Annotations:map[string]string{io.kubernetes.container.hash: 18340588,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303544245d2120299981d1b0508021cfaec29c22d849dae504c2f9faa8d12c6d,PodSandboxId:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722287044879919370,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb0c4e3c63f2fea6c7396214fa18a62189dc75790a83042a636d74d989b5e7f,PodSandboxId:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722287044746670279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768d8e818b193afe9ad99106fd466249bf66720e49875fca9e86bf0753910ad3,PodSandboxId:ee962aa1517b668148481f7e46cd61d401dd925a73c6e51cc4f22388981bf4cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722287026247468987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,},Annotations:map[string]string{io.kubernetes.container.hash: bc79eed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a5b89b57-f99f-4ca2-89af-be87843c6191 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	995939ce45b88       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   18 seconds ago       Running             kube-proxy                2                   64e486d764876       kube-proxy-45zxr
	7631b8b65edf4       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   23 seconds ago       Running             kube-controller-manager   2                   9cb53e7cd55a0       kube-controller-manager-pause-913034
	69a4081219038       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   23 seconds ago       Running             kube-scheduler            2                   6f1f3243ea910       kube-scheduler-pause-913034
	c0c4b3e419cc6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   23 seconds ago       Running             etcd                      2                   45fccc44ee359       etcd-pause-913034
	722982b05e1b0       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   23 seconds ago       Running             kube-apiserver            2                   ae570803b0606       kube-apiserver-pause-913034
	643c95d4748db       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   47 seconds ago       Running             coredns                   1                   f1e53f7f6da2c       coredns-7db6d8ff4d-djrln
	cbabc0f984e92       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   47 seconds ago       Exited              kube-proxy                1                   64e486d764876       kube-proxy-45zxr
	5832856fcd232       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   47 seconds ago       Exited              kube-apiserver            1                   ae570803b0606       kube-apiserver-pause-913034
	7c973d38fecef       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   47 seconds ago       Exited              etcd                      1                   45fccc44ee359       etcd-pause-913034
	303544245d212       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   47 seconds ago       Exited              kube-scheduler            1                   6f1f3243ea910       kube-scheduler-pause-913034
	1eb0c4e3c63f2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   47 seconds ago       Exited              kube-controller-manager   1                   9cb53e7cd55a0       kube-controller-manager-pause-913034
	768d8e818b193       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   ee962aa1517b6       coredns-7db6d8ff4d-djrln
	
	
	==> coredns [643c95d4748dbf727a8d2b4772b800ef2f12b5c90d7cd3c733db684715f6a4eb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59373 - 65319 "HINFO IN 3400262441109169624.3639415197000515649. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010977445s
	
	
	==> coredns [768d8e818b193afe9ad99106fd466249bf66720e49875fca9e86bf0753910ad3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-913034
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-913034
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=pause-913034
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T21_03_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 21:03:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-913034
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 21:04:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 21:04:33 +0000   Mon, 29 Jul 2024 21:03:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 21:04:33 +0000   Mon, 29 Jul 2024 21:03:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 21:04:33 +0000   Mon, 29 Jul 2024 21:03:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 21:04:33 +0000   Mon, 29 Jul 2024 21:03:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.20
	  Hostname:    pause-913034
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ecd6e4afa4d4e4e957e3f245e06394c
	  System UUID:                2ecd6e4a-fa4d-4e4e-957e-3f245e06394c
	  Boot ID:                    c0ce05dd-122d-4244-a605-722abf0e3a9d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-djrln                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     68s
	  kube-system                 etcd-pause-913034                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         81s
	  kube-system                 kube-apiserver-pause-913034             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-913034    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-45zxr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-pause-913034             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 66s                kube-proxy       
	  Normal  Starting                 18s                kube-proxy       
	  Normal  Starting                 43s                kube-proxy       
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  87s (x8 over 88s)  kubelet          Node pause-913034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x8 over 88s)  kubelet          Node pause-913034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x7 over 88s)  kubelet          Node pause-913034 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    81s                kubelet          Node pause-913034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  81s                kubelet          Node pause-913034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     81s                kubelet          Node pause-913034 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 81s                kubelet          Starting kubelet.
	  Normal  NodeReady                80s                kubelet          Node pause-913034 status is now: NodeReady
	  Normal  RegisteredNode           69s                node-controller  Node pause-913034 event: Registered Node pause-913034 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-913034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-913034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-913034 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                 node-controller  Node pause-913034 event: Registered Node pause-913034 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.546111] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.069928] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052719] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.153594] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.122840] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.286666] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.167207] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +4.385304] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.078202] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.006061] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +0.071968] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.799533] systemd-fstab-generator[1486]: Ignoring "noauto" option for root device
	[  +0.168409] kauditd_printk_skb: 21 callbacks suppressed
	[  +8.409878] kauditd_printk_skb: 89 callbacks suppressed
	[Jul29 21:04] systemd-fstab-generator[2355]: Ignoring "noauto" option for root device
	[  +0.158121] systemd-fstab-generator[2367]: Ignoring "noauto" option for root device
	[  +0.192305] systemd-fstab-generator[2381]: Ignoring "noauto" option for root device
	[  +0.124040] systemd-fstab-generator[2394]: Ignoring "noauto" option for root device
	[  +0.293754] systemd-fstab-generator[2422]: Ignoring "noauto" option for root device
	[  +0.950568] systemd-fstab-generator[2549]: Ignoring "noauto" option for root device
	[  +5.236193] kauditd_printk_skb: 195 callbacks suppressed
	[ +19.044637] systemd-fstab-generator[3393]: Ignoring "noauto" option for root device
	[  +5.833518] kauditd_printk_skb: 41 callbacks suppressed
	[ +15.004878] systemd-fstab-generator[3815]: Ignoring "noauto" option for root device
	
	
	==> etcd [7c973d38feceff6cfe2a132b3fcabd20e359ae0051c4ee71d4332de088685c09] <==
	{"level":"info","ts":"2024-07-29T21:04:05.750175Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T21:04:07.398553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc7eaa6ede108dec is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T21:04:07.398619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc7eaa6ede108dec became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T21:04:07.398662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc7eaa6ede108dec received MsgPreVoteResp from fc7eaa6ede108dec at term 2"}
	{"level":"info","ts":"2024-07-29T21:04:07.398677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc7eaa6ede108dec became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T21:04:07.398683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc7eaa6ede108dec received MsgVoteResp from fc7eaa6ede108dec at term 3"}
	{"level":"info","ts":"2024-07-29T21:04:07.398691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc7eaa6ede108dec became leader at term 3"}
	{"level":"info","ts":"2024-07-29T21:04:07.398698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fc7eaa6ede108dec elected leader fc7eaa6ede108dec at term 3"}
	{"level":"info","ts":"2024-07-29T21:04:07.400957Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fc7eaa6ede108dec","local-member-attributes":"{Name:pause-913034 ClientURLs:[https://192.168.61.20:2379]}","request-path":"/0/members/fc7eaa6ede108dec/attributes","cluster-id":"d382f5c9d11d69f0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T21:04:07.401122Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T21:04:07.40117Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T21:04:07.401543Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T21:04:07.401636Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T21:04:07.403382Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.20:2379"}
	{"level":"info","ts":"2024-07-29T21:04:07.403973Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T21:04:25.961252Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T21:04:25.96133Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-913034","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.20:2380"],"advertise-client-urls":["https://192.168.61.20:2379"]}
	{"level":"warn","ts":"2024-07-29T21:04:25.961429Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.20:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T21:04:25.961469Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.20:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T21:04:25.961591Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T21:04:25.961604Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T21:04:25.963414Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fc7eaa6ede108dec","current-leader-member-id":"fc7eaa6ede108dec"}
	{"level":"info","ts":"2024-07-29T21:04:25.968564Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.61.20:2380"}
	{"level":"info","ts":"2024-07-29T21:04:25.968841Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.61.20:2380"}
	{"level":"info","ts":"2024-07-29T21:04:25.96886Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-913034","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.20:2380"],"advertise-client-urls":["https://192.168.61.20:2379"]}
	
	
	==> etcd [c0c4b3e419cc67d30db41ed67e8e145a6250dc87c8a2a491fc4cce2ceb49625f] <==
	{"level":"warn","ts":"2024-07-29T21:04:33.795942Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.267440147s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-913034\" ","response":"range_response_count:1 size:5653"}
	{"level":"info","ts":"2024-07-29T21:04:33.795991Z","caller":"traceutil/trace.go:171","msg":"trace[330872765] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-913034; range_end:; response_count:1; response_revision:481; }","duration":"1.267520748s","start":"2024-07-29T21:04:32.528461Z","end":"2024-07-29T21:04:33.795982Z","steps":["trace[330872765] 'agreement among raft nodes before linearized reading'  (duration: 1.267358646s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T21:04:33.796022Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:32.528448Z","time spent":"1.267565233s","remote":"127.0.0.1:41342","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5677,"request content":"key:\"/registry/pods/kube-system/etcd-pause-913034\" "}
	{"level":"warn","ts":"2024-07-29T21:04:33.796357Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:32.4694Z","time spent":"1.326525931s","remote":"127.0.0.1:41428","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-913034\" mod_revision:423 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-913034\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-913034\" > >"}
	{"level":"warn","ts":"2024-07-29T21:04:33.796469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"420.725991ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T21:04:33.796523Z","caller":"traceutil/trace.go:171","msg":"trace[1348443076] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:481; }","duration":"420.831637ms","start":"2024-07-29T21:04:33.37568Z","end":"2024-07-29T21:04:33.796512Z","steps":["trace[1348443076] 'agreement among raft nodes before linearized reading'  (duration: 420.771161ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T21:04:33.796547Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:33.375662Z","time spent":"420.879017ms","remote":"127.0.0.1:41136","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-29T21:04:33.796377Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.223699892s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2024-07-29T21:04:33.796754Z","caller":"traceutil/trace.go:171","msg":"trace[1377317068] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:481; }","duration":"1.224090389s","start":"2024-07-29T21:04:32.572652Z","end":"2024-07-29T21:04:33.796742Z","steps":["trace[1377317068] 'agreement among raft nodes before linearized reading'  (duration: 1.223686919s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T21:04:33.796816Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:32.57264Z","time spent":"1.224165413s","remote":"127.0.0.1:41356","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":203,"request content":"key:\"/registry/serviceaccounts/kube-system/coredns\" "}
	{"level":"warn","ts":"2024-07-29T21:04:33.797014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"557.408053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" ","response":"range_response_count:66 size:59397"}
	{"level":"info","ts":"2024-07-29T21:04:33.797059Z","caller":"traceutil/trace.go:171","msg":"trace[1913177609] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:66; response_revision:481; }","duration":"557.475682ms","start":"2024-07-29T21:04:33.239574Z","end":"2024-07-29T21:04:33.79705Z","steps":["trace[1913177609] 'agreement among raft nodes before linearized reading'  (duration: 557.097559ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T21:04:33.797084Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:33.239561Z","time spent":"557.516352ms","remote":"127.0.0.1:41504","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":66,"response size":59421,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" "}
	{"level":"info","ts":"2024-07-29T21:04:33.796488Z","caller":"traceutil/trace.go:171","msg":"trace[847856133] transaction","detail":"{read_only:false; number_of_response:0; response_revision:480; }","duration":"1.340101112s","start":"2024-07-29T21:04:32.456374Z","end":"2024-07-29T21:04:33.796475Z","steps":["trace[847856133] 'process raft request'  (duration: 951.343764ms)","trace[847856133] 'compare'  (duration: 387.125686ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T21:04:33.803508Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:32.456359Z","time spent":"1.347108548s","remote":"127.0.0.1:41332","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":29,"request content":"compare:<target:MOD key:\"/registry/minions/pause-913034\" mod_revision:0 > success:<request_put:<key:\"/registry/minions/pause-913034\" value_size:3854 >> failure:<>"}
	{"level":"warn","ts":"2024-07-29T21:04:33.797296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"560.317766ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"range_response_count:1 size:442"}
	{"level":"warn","ts":"2024-07-29T21:04:33.797686Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.225010013s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2024-07-29T21:04:33.805376Z","caller":"traceutil/trace.go:171","msg":"trace[1560037222] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:481; }","duration":"1.232749938s","start":"2024-07-29T21:04:32.572609Z","end":"2024-07-29T21:04:33.805359Z","steps":["trace[1560037222] 'agreement among raft nodes before linearized reading'  (duration: 1.225046866s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T21:04:33.805641Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:32.572591Z","time spent":"1.233033941s","remote":"127.0.0.1:41356","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":209,"request content":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" "}
	{"level":"info","ts":"2024-07-29T21:04:33.806072Z","caller":"traceutil/trace.go:171","msg":"trace[23262103] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:481; }","duration":"569.112754ms","start":"2024-07-29T21:04:33.236948Z","end":"2024-07-29T21:04:33.806061Z","steps":["trace[23262103] 'agreement among raft nodes before linearized reading'  (duration: 560.31377ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T21:04:33.808289Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:33.23689Z","time spent":"571.382448ms","remote":"127.0.0.1:41528","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":466,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"info","ts":"2024-07-29T21:04:34.028572Z","caller":"traceutil/trace.go:171","msg":"trace[2071302045] linearizableReadLoop","detail":"{readStateIndex:511; appliedIndex:510; }","duration":"147.208145ms","start":"2024-07-29T21:04:33.881344Z","end":"2024-07-29T21:04:34.028552Z","steps":["trace[2071302045] 'read index received'  (duration: 119.48094ms)","trace[2071302045] 'applied index is now lower than readState.Index'  (duration: 27.72651ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T21:04:34.028849Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.481416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:monitoring\" ","response":"range_response_count:1 size:634"}
	{"level":"info","ts":"2024-07-29T21:04:34.028955Z","caller":"traceutil/trace.go:171","msg":"trace[845426345] range","detail":"{range_begin:/registry/clusterroles/system:monitoring; range_end:; response_count:1; response_revision:486; }","duration":"147.603721ms","start":"2024-07-29T21:04:33.881341Z","end":"2024-07-29T21:04:34.028945Z","steps":["trace[845426345] 'agreement among raft nodes before linearized reading'  (duration: 147.354049ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T21:04:34.029336Z","caller":"traceutil/trace.go:171","msg":"trace[1413533679] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"148.867902ms","start":"2024-07-29T21:04:33.880454Z","end":"2024-07-29T21:04:34.029322Z","steps":["trace[1413533679] 'process raft request'  (duration: 120.424889ms)","trace[1413533679] 'compare'  (duration: 27.597986ms)"],"step_count":2}
	
	
	==> kernel <==
	 21:04:53 up 1 min,  0 users,  load average: 0.79, 0.33, 0.12
	Linux pause-913034 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5832856fcd232fab3a660105d82da8acd8f636b0b0060a550cc26b80c9f0aad0] <==
	I0729 21:04:15.902983       1 controller.go:167] Shutting down OpenAPI controller
	I0729 21:04:15.902998       1 naming_controller.go:302] Shutting down NamingConditionController
	I0729 21:04:15.903011       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0729 21:04:15.903038       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0729 21:04:15.903064       1 controller.go:129] Ending legacy_token_tracking_controller
	I0729 21:04:15.903069       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0729 21:04:15.903082       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0729 21:04:15.903096       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0729 21:04:15.903106       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0729 21:04:15.904019       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0729 21:04:15.904282       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 21:04:15.904646       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 21:04:15.904729       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0729 21:04:15.904766       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0729 21:04:15.904788       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 21:04:15.904824       1 controller.go:157] Shutting down quota evaluator
	I0729 21:04:15.904847       1 controller.go:176] quota evaluator worker shutdown
	I0729 21:04:15.905119       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0729 21:04:15.905159       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 21:04:15.908327       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0729 21:04:15.908341       1 controller.go:176] quota evaluator worker shutdown
	I0729 21:04:15.908604       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 21:04:15.908737       1 controller.go:176] quota evaluator worker shutdown
	I0729 21:04:15.908842       1 controller.go:176] quota evaluator worker shutdown
	I0729 21:04:15.908847       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [722982b05e1b0032900a080ea07ff83e51390f67ca696130409062239f21880a] <==
	I0729 21:04:33.805474       1 trace.go:236] Trace[977142255]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:28c1bfdd-aae0-479b-a8ad-858f61e245c9,client:127.0.0.1,api-group:rbac.authorization.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:clusterroles,scope:cluster,url:/apis/rbac.authorization.k8s.io/v1/clusterroles,user-agent:kube-apiserver/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:LIST (29-Jul-2024 21:04:33.238) (total time: 566ms):
	Trace[977142255]: ["List(recursive=true) etcd3" audit-id:28c1bfdd-aae0-479b-a8ad-858f61e245c9,key:/clusterroles,resourceVersion:,resourceVersionMatch:,limit:0,continue: 566ms (21:04:33.239)]
	Trace[977142255]: [566.466149ms] [566.466149ms] END
	I0729 21:04:33.809652       1 trace.go:236] Trace[1360136650]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:a943e3ec-f126-4150-98ed-9b3079d4b7ca,client:127.0.0.1,api-group:scheduling.k8s.io,api-version:v1,name:system-node-critical,subresource:,namespace:,protocol:HTTP/2.0,resource:priorityclasses,scope:resource,url:/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical,user-agent:kube-apiserver/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:GET (29-Jul-2024 21:04:33.236) (total time: 573ms):
	Trace[1360136650]: ---"About to write a response" 573ms (21:04:33.809)
	Trace[1360136650]: [573.475211ms] [573.475211ms] END
	I0729 21:04:33.815784       1 trace.go:236] Trace[1203678473]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9d41f055-c25f-4621-b3fb-64f29f533d35,client:192.168.61.20,api-group:,api-version:v1,name:coredns,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/coredns/token,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 21:04:32.571) (total time: 1244ms):
	Trace[1203678473]: ---"watchCache locked acquired" 1236ms (21:04:33.807)
	Trace[1203678473]: [1.244171453s] [1.244171453s] END
	I0729 21:04:33.817122       1 trace.go:236] Trace[659771155]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:01ba8e6f-c87f-4c96-8e34-1fc37e0bf66b,client:192.168.61.20,api-group:,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 21:04:32.354) (total time: 1462ms):
	Trace[659771155]: ["Create etcd3" audit-id:01ba8e6f-c87f-4c96-8e34-1fc37e0bf66b,key:/minions/pause-913034,type:*core.Node,resource:nodes 1361ms (21:04:32.455)
	Trace[659771155]:  ---"Txn call succeeded" 1352ms (21:04:33.808)]
	Trace[659771155]: [1.462896691s] [1.462896691s] END
	I0729 21:04:33.821703       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 21:04:33.823510       1 trace.go:236] Trace[1842723628]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:0ec625c2-2892-4962-b014-6cc8dc18ea0a,client:192.168.61.20,api-group:,api-version:v1,name:kube-proxy,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 21:04:32.571) (total time: 1251ms):
	Trace[1842723628]: ---"watchCache locked acquired" 1247ms (21:04:33.819)
	Trace[1842723628]: [1.251703138s] [1.251703138s] END
	W0729 21:04:34.493540       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.20]
	I0729 21:04:34.495136       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 21:04:34.502468       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 21:04:34.795322       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 21:04:34.810690       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 21:04:34.853325       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 21:04:34.884937       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 21:04:34.893509       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [1eb0c4e3c63f2fea6c7396214fa18a62189dc75790a83042a636d74d989b5e7f] <==
	I0729 21:04:10.989068       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 21:04:10.989166       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 21:04:10.989268       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 21:04:10.989325       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 21:04:10.995066       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0729 21:04:10.995166       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0729 21:04:10.995482       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0729 21:04:10.995645       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0729 21:04:10.995675       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0729 21:04:11.009156       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0729 21:04:11.009434       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0729 21:04:11.009706       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0729 21:04:11.012325       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0729 21:04:11.012534       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0729 21:04:11.013285       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0729 21:04:11.014818       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0729 21:04:11.014949       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0729 21:04:11.014934       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0729 21:04:11.015283       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0729 21:04:11.022359       1 shared_informer.go:320] Caches are synced for tokens
	W0729 21:04:21.018548       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.61.20:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.61.20:8443: connect: connection refused
	W0729 21:04:21.519920       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.61.20:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.61.20:8443: connect: connection refused
	W0729 21:04:22.521683       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.61.20:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.61.20:8443: connect: connection refused
	W0729 21:04:24.527196       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.61.20:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.61.20:8443: connect: connection refused
	E0729 21:04:24.527403       1 cidr_allocator.go:146] "Failed to list all nodes" err="Get \"https://192.168.61.20:8443/api/v1/nodes\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-ipam-controller"
	
	
	==> kube-controller-manager [7631b8b65edf437a665d09a48d9071a082d5303ec1b370d0019269c804d603ec] <==
	I0729 21:04:46.328173       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-913034"
	I0729 21:04:46.328294       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 21:04:46.332418       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 21:04:46.342682       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 21:04:46.345171       1 shared_informer.go:320] Caches are synced for GC
	I0729 21:04:46.346268       1 shared_informer.go:320] Caches are synced for job
	I0729 21:04:46.350736       1 shared_informer.go:320] Caches are synced for disruption
	I0729 21:04:46.360483       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 21:04:46.360554       1 shared_informer.go:320] Caches are synced for deployment
	I0729 21:04:46.363866       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0729 21:04:46.365595       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 21:04:46.378298       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 21:04:46.387185       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 21:04:46.388477       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 21:04:46.388517       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 21:04:46.396900       1 shared_informer.go:320] Caches are synced for HPA
	I0729 21:04:46.401384       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 21:04:46.411330       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 21:04:46.411386       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 21:04:46.411497       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.133µs"
	I0729 21:04:46.416273       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 21:04:46.421669       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 21:04:46.841509       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 21:04:46.861125       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 21:04:46.861158       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [995939ce45b8807f3b903e65d133adca9cd15b9b1630bf9651c77791c38eee6f] <==
	I0729 21:04:34.263133       1 server_linux.go:69] "Using iptables proxy"
	I0729 21:04:34.275768       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.20"]
	I0729 21:04:34.330867       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 21:04:34.330936       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 21:04:34.330959       1 server_linux.go:165] "Using iptables Proxier"
	I0729 21:04:34.333937       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 21:04:34.334166       1 server.go:872] "Version info" version="v1.30.3"
	I0729 21:04:34.334518       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 21:04:34.336057       1 config.go:192] "Starting service config controller"
	I0729 21:04:34.336103       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 21:04:34.336136       1 config.go:101] "Starting endpoint slice config controller"
	I0729 21:04:34.336157       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 21:04:34.336821       1 config.go:319] "Starting node config controller"
	I0729 21:04:34.336856       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 21:04:34.436778       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 21:04:34.436939       1 shared_informer.go:320] Caches are synced for service config
	I0729 21:04:34.437033       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [cbabc0f984e92717c5c092e306d03f330e6116ee14a528087d770d2eed1717de] <==
	I0729 21:04:06.352489       1 server_linux.go:69] "Using iptables proxy"
	I0729 21:04:08.999689       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.20"]
	I0729 21:04:09.036168       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 21:04:09.036279       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 21:04:09.036301       1 server_linux.go:165] "Using iptables Proxier"
	I0729 21:04:09.038706       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 21:04:09.038911       1 server.go:872] "Version info" version="v1.30.3"
	I0729 21:04:09.038923       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 21:04:09.040050       1 config.go:192] "Starting service config controller"
	I0729 21:04:09.040066       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 21:04:09.040093       1 config.go:101] "Starting endpoint slice config controller"
	I0729 21:04:09.040097       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 21:04:09.040720       1 config.go:319] "Starting node config controller"
	I0729 21:04:09.040772       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 21:04:09.141128       1 shared_informer.go:320] Caches are synced for node config
	I0729 21:04:09.141170       1 shared_informer.go:320] Caches are synced for service config
	I0729 21:04:09.141240       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [303544245d2120299981d1b0508021cfaec29c22d849dae504c2f9faa8d12c6d] <==
	I0729 21:04:06.669057       1 serving.go:380] Generated self-signed cert in-memory
	W0729 21:04:08.929119       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 21:04:08.929378       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 21:04:08.929500       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 21:04:08.929595       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 21:04:08.977964       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 21:04:08.978951       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 21:04:08.983050       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 21:04:08.983947       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 21:04:08.988524       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 21:04:08.983963       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 21:04:09.089645       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 21:04:26.095820       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0729 21:04:26.096477       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [69a40812190387eb204df88a23bb56d11263725a8341b5492a1c5693482f4c05] <==
	I0729 21:04:29.553193       1 serving.go:380] Generated self-signed cert in-memory
	W0729 21:04:32.278042       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 21:04:32.278076       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 21:04:32.278133       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 21:04:32.278140       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 21:04:32.316648       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 21:04:32.316680       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 21:04:32.318112       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 21:04:32.320294       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 21:04:32.320367       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 21:04:32.320395       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 21:04:32.421286       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 21:04:28 pause-913034 kubelet[3400]: I0729 21:04:28.788360    3400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3d7fd520617d6aa29f585dbbd93fd9e-ca-certs\") pod \"kube-controller-manager-pause-913034\" (UID: \"a3d7fd520617d6aa29f585dbbd93fd9e\") " pod="kube-system/kube-controller-manager-pause-913034"
	Jul 29 21:04:28 pause-913034 kubelet[3400]: I0729 21:04:28.788376    3400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb093dff2f8f2763c3e735334f097b2f-kubeconfig\") pod \"kube-scheduler-pause-913034\" (UID: \"cb093dff2f8f2763c3e735334f097b2f\") " pod="kube-system/kube-scheduler-pause-913034"
	Jul 29 21:04:28 pause-913034 kubelet[3400]: I0729 21:04:28.839657    3400 kubelet_node_status.go:73] "Attempting to register node" node="pause-913034"
	Jul 29 21:04:28 pause-913034 kubelet[3400]: E0729 21:04:28.840549    3400 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.20:8443: connect: connection refused" node="pause-913034"
	Jul 29 21:04:28 pause-913034 kubelet[3400]: I0729 21:04:28.991240    3400 scope.go:117] "RemoveContainer" containerID="7c973d38feceff6cfe2a132b3fcabd20e359ae0051c4ee71d4332de088685c09"
	Jul 29 21:04:28 pause-913034 kubelet[3400]: I0729 21:04:28.992792    3400 scope.go:117] "RemoveContainer" containerID="5832856fcd232fab3a660105d82da8acd8f636b0b0060a550cc26b80c9f0aad0"
	Jul 29 21:04:28 pause-913034 kubelet[3400]: I0729 21:04:28.994043    3400 scope.go:117] "RemoveContainer" containerID="1eb0c4e3c63f2fea6c7396214fa18a62189dc75790a83042a636d74d989b5e7f"
	Jul 29 21:04:28 pause-913034 kubelet[3400]: I0729 21:04:28.996390    3400 scope.go:117] "RemoveContainer" containerID="303544245d2120299981d1b0508021cfaec29c22d849dae504c2f9faa8d12c6d"
	Jul 29 21:04:29 pause-913034 kubelet[3400]: E0729 21:04:29.149648    3400 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-913034?timeout=10s\": dial tcp 192.168.61.20:8443: connect: connection refused" interval="800ms"
	Jul 29 21:04:29 pause-913034 kubelet[3400]: I0729 21:04:29.243001    3400 kubelet_node_status.go:73] "Attempting to register node" node="pause-913034"
	Jul 29 21:04:29 pause-913034 kubelet[3400]: E0729 21:04:29.247855    3400 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.20:8443: connect: connection refused" node="pause-913034"
	Jul 29 21:04:29 pause-913034 kubelet[3400]: W0729 21:04:29.488515    3400 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.20:8443: connect: connection refused
	Jul 29 21:04:29 pause-913034 kubelet[3400]: E0729 21:04:29.488608    3400 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.20:8443: connect: connection refused
	Jul 29 21:04:30 pause-913034 kubelet[3400]: I0729 21:04:30.049981    3400 kubelet_node_status.go:73] "Attempting to register node" node="pause-913034"
	Jul 29 21:04:32 pause-913034 kubelet[3400]: I0729 21:04:32.515345    3400 apiserver.go:52] "Watching apiserver"
	Jul 29 21:04:32 pause-913034 kubelet[3400]: I0729 21:04:32.520095    3400 topology_manager.go:215] "Topology Admit Handler" podUID="62f09954-bceb-4a05-a703-00b80c49e9bc" podNamespace="kube-system" podName="kube-proxy-45zxr"
	Jul 29 21:04:32 pause-913034 kubelet[3400]: I0729 21:04:32.520450    3400 topology_manager.go:215] "Topology Admit Handler" podUID="5526db23-d0f1-48ca-bd4e-d87981b47b51" podNamespace="kube-system" podName="coredns-7db6d8ff4d-djrln"
	Jul 29 21:04:32 pause-913034 kubelet[3400]: I0729 21:04:32.544684    3400 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 21:04:32 pause-913034 kubelet[3400]: I0729 21:04:32.570184    3400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62f09954-bceb-4a05-a703-00b80c49e9bc-lib-modules\") pod \"kube-proxy-45zxr\" (UID: \"62f09954-bceb-4a05-a703-00b80c49e9bc\") " pod="kube-system/kube-proxy-45zxr"
	Jul 29 21:04:32 pause-913034 kubelet[3400]: I0729 21:04:32.570438    3400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62f09954-bceb-4a05-a703-00b80c49e9bc-xtables-lock\") pod \"kube-proxy-45zxr\" (UID: \"62f09954-bceb-4a05-a703-00b80c49e9bc\") " pod="kube-system/kube-proxy-45zxr"
	Jul 29 21:04:33 pause-913034 kubelet[3400]: I0729 21:04:33.831110    3400 kubelet_node_status.go:112] "Node was previously registered" node="pause-913034"
	Jul 29 21:04:33 pause-913034 kubelet[3400]: I0729 21:04:33.831816    3400 kubelet_node_status.go:76] "Successfully registered node" node="pause-913034"
	Jul 29 21:04:33 pause-913034 kubelet[3400]: I0729 21:04:33.836898    3400 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 21:04:33 pause-913034 kubelet[3400]: I0729 21:04:33.838642    3400 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 21:04:34 pause-913034 kubelet[3400]: I0729 21:04:34.021816    3400 scope.go:117] "RemoveContainer" containerID="cbabc0f984e92717c5c092e306d03f330e6116ee14a528087d770d2eed1717de"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-913034 -n pause-913034
helpers_test.go:261: (dbg) Run:  kubectl --context pause-913034 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-913034 -n pause-913034
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-913034 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-913034 logs -n 25: (1.290344127s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p NoKubernetes-148160                | NoKubernetes-148160       | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:01 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-832067 ssh cat     | force-systemd-flag-832067 | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:01 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-832067          | force-systemd-flag-832067 | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:01 UTC |
	| start   | -p cert-expiration-461577             | cert-expiration-461577    | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:02 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-148160 sudo           | NoKubernetes-148160       | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-148160                | NoKubernetes-148160       | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:01 UTC |
	| start   | -p NoKubernetes-148160                | NoKubernetes-148160       | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:02 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p running-upgrade-160077             | running-upgrade-160077    | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:01 UTC |
	| start   | -p cert-options-768831                | cert-options-768831       | jenkins | v1.33.1 | 29 Jul 24 21:01 UTC | 29 Jul 24 21:03 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-148160 sudo           | NoKubernetes-148160       | jenkins | v1.33.1 | 29 Jul 24 21:02 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-148160                | NoKubernetes-148160       | jenkins | v1.33.1 | 29 Jul 24 21:02 UTC | 29 Jul 24 21:02 UTC |
	| start   | -p pause-913034 --memory=2048         | pause-913034              | jenkins | v1.33.1 | 29 Jul 24 21:02 UTC | 29 Jul 24 21:03 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-768831 ssh               | cert-options-768831       | jenkins | v1.33.1 | 29 Jul 24 21:03 UTC | 29 Jul 24 21:03 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-768831 -- sudo        | cert-options-768831       | jenkins | v1.33.1 | 29 Jul 24 21:03 UTC | 29 Jul 24 21:03 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-768831                | cert-options-768831       | jenkins | v1.33.1 | 29 Jul 24 21:03 UTC | 29 Jul 24 21:03 UTC |
	| start   | -p stopped-upgrade-252364             | minikube                  | jenkins | v1.26.0 | 29 Jul 24 21:03 UTC | 29 Jul 24 21:04 UTC |
	|         | --memory=2200 --vm-driver=kvm2        |                           |         |         |                     |                     |
	|         |  --container-runtime=crio             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-171355          | kubernetes-upgrade-171355 | jenkins | v1.33.1 | 29 Jul 24 21:03 UTC | 29 Jul 24 21:03 UTC |
	| start   | -p kubernetes-upgrade-171355          | kubernetes-upgrade-171355 | jenkins | v1.33.1 | 29 Jul 24 21:03 UTC | 29 Jul 24 21:04 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-913034                       | pause-913034              | jenkins | v1.33.1 | 29 Jul 24 21:03 UTC | 29 Jul 24 21:04 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-252364 stop           | minikube                  | jenkins | v1.26.0 | 29 Jul 24 21:04 UTC | 29 Jul 24 21:04 UTC |
	| start   | -p stopped-upgrade-252364             | stopped-upgrade-252364    | jenkins | v1.33.1 | 29 Jul 24 21:04 UTC | 29 Jul 24 21:04 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-171355          | kubernetes-upgrade-171355 | jenkins | v1.33.1 | 29 Jul 24 21:04 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-171355          | kubernetes-upgrade-171355 | jenkins | v1.33.1 | 29 Jul 24 21:04 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-252364             | stopped-upgrade-252364    | jenkins | v1.33.1 | 29 Jul 24 21:04 UTC | 29 Jul 24 21:04 UTC |
	| start   | -p auto-404553 --memory=3072          | auto-404553               | jenkins | v1.33.1 | 29 Jul 24 21:04 UTC |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 21:04:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 21:04:51.203797  788012 out.go:291] Setting OutFile to fd 1 ...
	I0729 21:04:51.204084  788012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 21:04:51.204094  788012 out.go:304] Setting ErrFile to fd 2...
	I0729 21:04:51.204098  788012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 21:04:51.204320  788012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 21:04:51.205002  788012 out.go:298] Setting JSON to false
	I0729 21:04:51.206027  788012 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":17238,"bootTime":1722269853,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 21:04:51.206084  788012 start.go:139] virtualization: kvm guest
	I0729 21:04:51.208448  788012 out.go:177] * [auto-404553] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 21:04:51.210083  788012 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 21:04:51.210081  788012 notify.go:220] Checking for updates...
	I0729 21:04:51.212513  788012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 21:04:51.213779  788012 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 21:04:51.215007  788012 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 21:04:51.216239  788012 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 21:04:51.217433  788012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 21:04:51.219042  788012 config.go:182] Loaded profile config "cert-expiration-461577": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 21:04:51.219141  788012 config.go:182] Loaded profile config "kubernetes-upgrade-171355": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.0-beta.0
	I0729 21:04:51.219256  788012 config.go:182] Loaded profile config "pause-913034": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 21:04:51.219337  788012 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 21:04:51.259502  788012 out.go:177] * Using the kvm2 driver based on user configuration
	I0729 21:04:51.260726  788012 start.go:297] selected driver: kvm2
	I0729 21:04:51.260741  788012 start.go:901] validating driver "kvm2" against <nil>
	I0729 21:04:51.260754  788012 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 21:04:51.261618  788012 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 21:04:51.261715  788012 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 21:04:51.277511  788012 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 21:04:51.277572  788012 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 21:04:51.277792  788012 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 21:04:51.277818  788012 cni.go:84] Creating CNI manager for ""
	I0729 21:04:51.277826  788012 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 21:04:51.277833  788012 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 21:04:51.277886  788012 start.go:340] cluster config:
	{Name:auto-404553 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-404553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 21:04:51.277979  788012 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 21:04:51.279685  788012 out.go:177] * Starting "auto-404553" primary control-plane node in "auto-404553" cluster
	I0729 21:04:51.281048  788012 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 21:04:51.281091  788012 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 21:04:51.281104  788012 cache.go:56] Caching tarball of preloaded images
	I0729 21:04:51.281193  788012 preload.go:172] Found /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0729 21:04:51.281204  788012 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on crio
	I0729 21:04:51.281308  788012 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/auto-404553/config.json ...
	I0729 21:04:51.281333  788012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/auto-404553/config.json: {Name:mk186cdbda2945eb4dae15002f84cc031a1886d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 21:04:51.281494  788012 start.go:360] acquireMachinesLock for auto-404553: {Name:mke799cc8ba86d25c6b1dc749cde02c25e191395 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0729 21:04:51.281536  788012 start.go:364] duration metric: took 25.988µs to acquireMachinesLock for "auto-404553"
	I0729 21:04:51.281557  788012 start.go:93] Provisioning new machine with config: &{Name:auto-404553 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.3 ClusterName:auto-404553 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0729 21:04:51.281667  788012 start.go:125] createHost starting for "" (driver="kvm2")
	I0729 21:04:49.062886  787271 addons.go:510] duration metric: took 2.925936ms for enable addons: enabled=[]
	I0729 21:04:49.062948  787271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 21:04:49.237072  787271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 21:04:49.253138  787271 node_ready.go:35] waiting up to 6m0s for node "pause-913034" to be "Ready" ...
	I0729 21:04:49.256432  787271 node_ready.go:49] node "pause-913034" has status "Ready":"True"
	I0729 21:04:49.256465  787271 node_ready.go:38] duration metric: took 3.285682ms for node "pause-913034" to be "Ready" ...
	I0729 21:04:49.256478  787271 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 21:04:49.262371  787271 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-djrln" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:49.268711  787271 pod_ready.go:92] pod "coredns-7db6d8ff4d-djrln" in "kube-system" namespace has status "Ready":"True"
	I0729 21:04:49.268738  787271 pod_ready.go:81] duration metric: took 6.338278ms for pod "coredns-7db6d8ff4d-djrln" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:49.268749  787271 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:49.430226  787271 pod_ready.go:92] pod "etcd-pause-913034" in "kube-system" namespace has status "Ready":"True"
	I0729 21:04:49.430254  787271 pod_ready.go:81] duration metric: took 161.49732ms for pod "etcd-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:49.430270  787271 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:49.828879  787271 pod_ready.go:92] pod "kube-apiserver-pause-913034" in "kube-system" namespace has status "Ready":"True"
	I0729 21:04:49.828909  787271 pod_ready.go:81] duration metric: took 398.631181ms for pod "kube-apiserver-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:49.828923  787271 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:50.228991  787271 pod_ready.go:92] pod "kube-controller-manager-pause-913034" in "kube-system" namespace has status "Ready":"True"
	I0729 21:04:50.229017  787271 pod_ready.go:81] duration metric: took 400.084854ms for pod "kube-controller-manager-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:50.229031  787271 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-45zxr" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:50.629728  787271 pod_ready.go:92] pod "kube-proxy-45zxr" in "kube-system" namespace has status "Ready":"True"
	I0729 21:04:50.629751  787271 pod_ready.go:81] duration metric: took 400.713333ms for pod "kube-proxy-45zxr" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:50.629761  787271 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:51.028923  787271 pod_ready.go:92] pod "kube-scheduler-pause-913034" in "kube-system" namespace has status "Ready":"True"
	I0729 21:04:51.028950  787271 pod_ready.go:81] duration metric: took 399.182321ms for pod "kube-scheduler-pause-913034" in "kube-system" namespace to be "Ready" ...
	I0729 21:04:51.028960  787271 pod_ready.go:38] duration metric: took 1.772467886s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 21:04:51.028977  787271 api_server.go:52] waiting for apiserver process to appear ...
	I0729 21:04:51.029028  787271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 21:04:51.046387  787271 api_server.go:72] duration metric: took 1.9864563s to wait for apiserver process to appear ...
	I0729 21:04:51.046417  787271 api_server.go:88] waiting for apiserver healthz status ...
	I0729 21:04:51.046441  787271 api_server.go:253] Checking apiserver healthz at https://192.168.61.20:8443/healthz ...
	I0729 21:04:51.051410  787271 api_server.go:279] https://192.168.61.20:8443/healthz returned 200:
	ok
	I0729 21:04:51.052603  787271 api_server.go:141] control plane version: v1.30.3
	I0729 21:04:51.052625  787271 api_server.go:131] duration metric: took 6.200125ms to wait for apiserver health ...
	I0729 21:04:51.052636  787271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 21:04:51.232867  787271 system_pods.go:59] 6 kube-system pods found
	I0729 21:04:51.232908  787271 system_pods.go:61] "coredns-7db6d8ff4d-djrln" [5526db23-d0f1-48ca-bd4e-d87981b47b51] Running
	I0729 21:04:51.232915  787271 system_pods.go:61] "etcd-pause-913034" [81af8f64-5999-41ab-8f6e-539e0db4f628] Running
	I0729 21:04:51.232919  787271 system_pods.go:61] "kube-apiserver-pause-913034" [3c4dd0a3-99f6-4df4-9703-10f6bce2f514] Running
	I0729 21:04:51.232924  787271 system_pods.go:61] "kube-controller-manager-pause-913034" [5a2f6655-600d-4e80-8339-c8d17e241121] Running
	I0729 21:04:51.232929  787271 system_pods.go:61] "kube-proxy-45zxr" [62f09954-bceb-4a05-a703-00b80c49e9bc] Running
	I0729 21:04:51.232967  787271 system_pods.go:61] "kube-scheduler-pause-913034" [7772c8f4-1a50-4678-a7c0-64c4434e56c0] Running
	I0729 21:04:51.232977  787271 system_pods.go:74] duration metric: took 180.334067ms to wait for pod list to return data ...
	I0729 21:04:51.232988  787271 default_sa.go:34] waiting for default service account to be created ...
	I0729 21:04:51.428544  787271 default_sa.go:45] found service account: "default"
	I0729 21:04:51.428568  787271 default_sa.go:55] duration metric: took 195.573469ms for default service account to be created ...
	I0729 21:04:51.428576  787271 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 21:04:51.631489  787271 system_pods.go:86] 6 kube-system pods found
	I0729 21:04:51.631520  787271 system_pods.go:89] "coredns-7db6d8ff4d-djrln" [5526db23-d0f1-48ca-bd4e-d87981b47b51] Running
	I0729 21:04:51.631526  787271 system_pods.go:89] "etcd-pause-913034" [81af8f64-5999-41ab-8f6e-539e0db4f628] Running
	I0729 21:04:51.631531  787271 system_pods.go:89] "kube-apiserver-pause-913034" [3c4dd0a3-99f6-4df4-9703-10f6bce2f514] Running
	I0729 21:04:51.631535  787271 system_pods.go:89] "kube-controller-manager-pause-913034" [5a2f6655-600d-4e80-8339-c8d17e241121] Running
	I0729 21:04:51.631539  787271 system_pods.go:89] "kube-proxy-45zxr" [62f09954-bceb-4a05-a703-00b80c49e9bc] Running
	I0729 21:04:51.631542  787271 system_pods.go:89] "kube-scheduler-pause-913034" [7772c8f4-1a50-4678-a7c0-64c4434e56c0] Running
	I0729 21:04:51.631551  787271 system_pods.go:126] duration metric: took 202.967399ms to wait for k8s-apps to be running ...
	I0729 21:04:51.631558  787271 system_svc.go:44] waiting for kubelet service to be running ....
	I0729 21:04:51.631603  787271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 21:04:51.646254  787271 system_svc.go:56] duration metric: took 14.685539ms WaitForService to wait for kubelet
	I0729 21:04:51.646291  787271 kubeadm.go:582] duration metric: took 2.586362971s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 21:04:51.646324  787271 node_conditions.go:102] verifying NodePressure condition ...
	I0729 21:04:51.829383  787271 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0729 21:04:51.829412  787271 node_conditions.go:123] node cpu capacity is 2
	I0729 21:04:51.829427  787271 node_conditions.go:105] duration metric: took 183.095257ms to run NodePressure ...
	I0729 21:04:51.829443  787271 start.go:241] waiting for startup goroutines ...
	I0729 21:04:51.829456  787271 start.go:246] waiting for cluster config update ...
	I0729 21:04:51.829468  787271 start.go:255] writing updated cluster config ...
	I0729 21:04:51.829821  787271 ssh_runner.go:195] Run: rm -f paused
	I0729 21:04:51.885317  787271 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 21:04:51.887679  787271 out.go:177] * Done! kubectl is now configured to use "pause-913034" cluster and "default" namespace by default
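	
	The pause-913034 readiness sequence logged above (a pgrep for the kube-apiserver process, a GET against https://192.168.61.20:8443/healthz expecting "ok", and `sudo systemctl is-active --quiet service kubelet`) can be reproduced outside the test harness with a small poll loop. The Go sketch below is an illustration only, not minikube's own code; the address, timeout, and the insecure TLS setting are assumptions taken from the log lines, so adjust them for your cluster.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls an apiserver /healthz endpoint until it returns "ok"
	// or the deadline passes, mirroring the kind of check recorded in the log
	// above. TLS verification is skipped purely to keep the sketch short.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}
	
	func main() {
		// Hypothetical address taken from the log above; adjust for your cluster.
		if err := waitForHealthz("https://192.168.61.20:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthz returned ok")
	}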
	
	
	==> CRI-O <==
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.514793372Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35c436c6-8769-4ef3-8ef9-a43760c9c237 name=/runtime.v1.RuntimeService/Version
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.516052649Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7bcb7cb8-b06b-401e-b777-5b535b1939f2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.516562488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722287094516537583,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bcb7cb8-b06b-401e-b777-5b535b1939f2 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.517291779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ddcf1a53-f4e0-4cdf-a3f5-87aa341baeef name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.517360332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ddcf1a53-f4e0-4cdf-a3f5-87aa341baeef name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.517947446Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995939ce45b8807f3b903e65d133adca9cd15b9b1630bf9651c77791c38eee6f,PodSandboxId:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722287074041725194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,},Annotations:map[string]string{io.kubernetes.container.hash: f4fbace3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c4b3e419cc67d30db41ed67e8e145a6250dc87c8a2a491fc4cce2ceb49625f,PodSandboxId:45fccc44ee359022b73fc7974505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722287069030923831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,},Annotations:map[string]string{io.kubernetes.container.hash: 18340588,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7631b8b65edf437a665d09a48d9071a082d5303ec1b370d0019269c804d603ec,PodSandboxId:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722287069057877087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69a40812190387eb204df88a23bb56d11263725a8341b5492a1c5693482f4c05,PodSandboxId:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722287069045703756,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722982b05e1b0032900a080ea07ff83e51390f67ca696130409062239f21880a,PodSandboxId:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722287069015416396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,},Annotations:map[string]string{io.kubernetes.container.hash: bce4469f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:643c95d4748dbf727a8d2b4772b800ef2f12b5c90d7cd3c733db684715f6a4eb,PodSandboxId:f1e53f7f6da2c20451b0b74b0d60bb47338cd7a5ecb4027db5d06d51c5e140ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287045532930527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,},Annotations:map[string]string{io.kubernetes.container.hash: bc79eed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbabc0f984e92717c5c092e306d03f330e6116ee14a528087d770d2eed1717de,PodSandboxId:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722287045087615947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,},Annotations:map[string]string{io.kubernetes.container.hash: f4fbac
e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5832856fcd232fab3a660105d82da8acd8f636b0b0060a550cc26b80c9f0aad0,PodSandboxId:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722287044991106627,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,},Annotations:map[string]string{io.kubernetes.container.hash: bce4469f,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c973d38feceff6cfe2a132b3fcabd20e359ae0051c4ee71d4332de088685c09,PodSandboxId:45fccc44ee359022b73fc7974505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722287044980948514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,},Annotations:map[string]string{io.kubernetes.container.hash: 18340588,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303544245d2120299981d1b0508021cfaec29c22d849dae504c2f9faa8d12c6d,PodSandboxId:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722287044879919370,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb0c4e3c63f2fea6c7396214fa18a62189dc75790a83042a636d74d989b5e7f,PodSandboxId:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722287044746670279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768d8e818b193afe9ad99106fd466249bf66720e49875fca9e86bf0753910ad3,PodSandboxId:ee962aa1517b668148481f7e46cd61d401dd925a73c6e51cc4f22388981bf4cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722287026247468987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,},Annotations:map[string]string{io.kubernetes.container.hash: bc79eed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ddcf1a53-f4e0-4cdf-a3f5-87aa341baeef name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.568435596Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3921052a-f38d-4021-a9eb-2230f63f3e54 name=/runtime.v1.RuntimeService/Version
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.568530275Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3921052a-f38d-4021-a9eb-2230f63f3e54 name=/runtime.v1.RuntimeService/Version
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.569569690Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=92b54699-caeb-484d-82a0-8141274a100d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.570083603Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722287094570056113,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=92b54699-caeb-484d-82a0-8141274a100d name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.570582753Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3affa4cb-c4f8-43b2-88eb-b1e7e5735154 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.570649379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3affa4cb-c4f8-43b2-88eb-b1e7e5735154 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.570907620Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995939ce45b8807f3b903e65d133adca9cd15b9b1630bf9651c77791c38eee6f,PodSandboxId:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722287074041725194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,},Annotations:map[string]string{io.kubernetes.container.hash: f4fbace3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c4b3e419cc67d30db41ed67e8e145a6250dc87c8a2a491fc4cce2ceb49625f,PodSandboxId:45fccc44ee359022b73fc7974505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722287069030923831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,},Annotations:map[string]string{io.kubernetes.container.hash: 18340588,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7631b8b65edf437a665d09a48d9071a082d5303ec1b370d0019269c804d603ec,PodSandboxId:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722287069057877087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69a40812190387eb204df88a23bb56d11263725a8341b5492a1c5693482f4c05,PodSandboxId:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722287069045703756,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722982b05e1b0032900a080ea07ff83e51390f67ca696130409062239f21880a,PodSandboxId:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722287069015416396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,},Annotations:map[string]string{io.kubernetes.container.hash: bce4469f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:643c95d4748dbf727a8d2b4772b800ef2f12b5c90d7cd3c733db684715f6a4eb,PodSandboxId:f1e53f7f6da2c20451b0b74b0d60bb47338cd7a5ecb4027db5d06d51c5e140ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287045532930527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,},Annotations:map[string]string{io.kubernetes.container.hash: bc79eed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbabc0f984e92717c5c092e306d03f330e6116ee14a528087d770d2eed1717de,PodSandboxId:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722287045087615947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,},Annotations:map[string]string{io.kubernetes.container.hash: f4fbac
e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5832856fcd232fab3a660105d82da8acd8f636b0b0060a550cc26b80c9f0aad0,PodSandboxId:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722287044991106627,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,},Annotations:map[string]string{io.kubernetes.container.hash: bce4469f,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c973d38feceff6cfe2a132b3fcabd20e359ae0051c4ee71d4332de088685c09,PodSandboxId:45fccc44ee359022b73fc7974505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722287044980948514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,},Annotations:map[string]string{io.kubernetes.container.hash: 18340588,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303544245d2120299981d1b0508021cfaec29c22d849dae504c2f9faa8d12c6d,PodSandboxId:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722287044879919370,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb0c4e3c63f2fea6c7396214fa18a62189dc75790a83042a636d74d989b5e7f,PodSandboxId:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722287044746670279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768d8e818b193afe9ad99106fd466249bf66720e49875fca9e86bf0753910ad3,PodSandboxId:ee962aa1517b668148481f7e46cd61d401dd925a73c6e51cc4f22388981bf4cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722287026247468987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,},Annotations:map[string]string{io.kubernetes.container.hash: bc79eed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3affa4cb-c4f8-43b2-88eb-b1e7e5735154 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.585168386Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b9bc6276-9f4b-4f8d-9c37-c78f113b69c7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.585485662Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:f1e53f7f6da2c20451b0b74b0d60bb47338cd7a5ecb4027db5d06d51c5e140ab,Metadata:&PodSandboxMetadata{Name:coredns-7db6d8ff4d-djrln,Uid:5526db23-d0f1-48ca-bd4e-d87981b47b51,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722287044582946413,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,k8s-app: kube-dns,pod-template-hash: 7db6d8ff4d,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T21:03:44.871691165Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-913034,Uid:bb9d0d0929c1cc5faa7e9aaed304676a,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1722287044573771970,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.20:8443,kubernetes.io/config.hash: bb9d0d0929c1cc5faa7e9aaed304676a,kubernetes.io/config.seen: 2024-07-29T21:03:31.210322540Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-913034,Uid:cb093dff2f8f2763c3e735334f097b2f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722287044562990727,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.na
mespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cb093dff2f8f2763c3e735334f097b2f,kubernetes.io/config.seen: 2024-07-29T21:03:31.210324921Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&PodSandboxMetadata{Name:kube-proxy-45zxr,Uid:62f09954-bceb-4a05-a703-00b80c49e9bc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722287044557740824,Labels:map[string]string{controller-revision-hash: 5bbc78d4f8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-07-29T21:03:44.768047266Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:45fccc44ee359022b73fc797
4505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&PodSandboxMetadata{Name:etcd-pause-913034,Uid:8a4ed5a051ff710972d85f61a467cfef,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722287044549484475,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.20:2379,kubernetes.io/config.hash: 8a4ed5a051ff710972d85f61a467cfef,kubernetes.io/config.seen: 2024-07-29T21:03:31.210318141Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-913034,Uid:a3d7fd520617d6aa29f585dbbd93fd9e,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1722287044497987962,Labels:map[string]strin
g{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: a3d7fd520617d6aa29f585dbbd93fd9e,kubernetes.io/config.seen: 2024-07-29T21:03:31.210323991Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b9bc6276-9f4b-4f8d-9c37-c78f113b69c7 name=/runtime.v1.RuntimeService/ListPodSandbox
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.586177313Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6037be72-f8a8-4a4d-aade-75e91b545239 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.586293268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6037be72-f8a8-4a4d-aade-75e91b545239 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.587617264Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995939ce45b8807f3b903e65d133adca9cd15b9b1630bf9651c77791c38eee6f,PodSandboxId:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722287074041725194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,},Annotations:map[string]string{io.kubernetes.container.hash: f4fbace3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c4b3e419cc67d30db41ed67e8e145a6250dc87c8a2a491fc4cce2ceb49625f,PodSandboxId:45fccc44ee359022b73fc7974505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722287069030923831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,},Annotations:map[string]string{io.kubernetes.container.hash: 18340588,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7631b8b65edf437a665d09a48d9071a082d5303ec1b370d0019269c804d603ec,PodSandboxId:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722287069057877087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69a40812190387eb204df88a23bb56d11263725a8341b5492a1c5693482f4c05,PodSandboxId:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722287069045703756,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722982b05e1b0032900a080ea07ff83e51390f67ca696130409062239f21880a,PodSandboxId:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722287069015416396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,},Annotations:map[string]string{io.kubernetes.container.hash: bce4469f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:643c95d4748dbf727a8d2b4772b800ef2f12b5c90d7cd3c733db684715f6a4eb,PodSandboxId:f1e53f7f6da2c20451b0b74b0d60bb47338cd7a5ecb4027db5d06d51c5e140ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287045532930527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,},Annotations:map[string]string{io.kubernetes.container.hash: bc79eed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6037be72-f8a8-4a4d-aade-75e91b545239 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.618242945Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a9724a58-819b-4506-885c-556c4c1045da name=/runtime.v1.RuntimeService/Version
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.618330728Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a9724a58-819b-4506-885c-556c4c1045da name=/runtime.v1.RuntimeService/Version
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.619740832Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d2dffa31-c412-4883-aac0-1a432583bd18 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.620398091Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1722287094620371037,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:124365,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2dffa31-c412-4883-aac0-1a432583bd18 name=/runtime.v1.ImageService/ImageFsInfo
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.621024565Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=985e10cb-11e1-4e3c-8eda-5ab35cd72244 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.621162643Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=985e10cb-11e1-4e3c-8eda-5ab35cd72244 name=/runtime.v1.RuntimeService/ListContainers
	Jul 29 21:04:54 pause-913034 crio[2437]: time="2024-07-29 21:04:54.621513043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:995939ce45b8807f3b903e65d133adca9cd15b9b1630bf9651c77791c38eee6f,PodSandboxId:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_RUNNING,CreatedAt:1722287074041725194,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,},Annotations:map[string]string{io.kubernetes.container.hash: f4fbace3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0c4b3e419cc67d30db41ed67e8e145a6250dc87c8a2a491fc4cce2ceb49625f,PodSandboxId:45fccc44ee359022b73fc7974505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_RUNNING,CreatedAt:1722287069030923831,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,},Annotations:map[string]string{io.kubernetes.container.hash: 18340588,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.containe
r.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7631b8b65edf437a665d09a48d9071a082d5303ec1b370d0019269c804d603ec,PodSandboxId:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_RUNNING,CreatedAt:1722287069057877087,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes
.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69a40812190387eb204df88a23bb56d11263725a8341b5492a1c5693482f4c05,PodSandboxId:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_RUNNING,CreatedAt:1722287069045703756,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:722982b05e1b0032900a080ea07ff83e51390f67ca696130409062239f21880a,PodSandboxId:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_RUNNING,CreatedAt:1722287069015416396,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,},Annotations:map[string]string{io.kubernetes.container.hash: bce4469f,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:643c95d4748dbf727a8d2b4772b800ef2f12b5c90d7cd3c733db684715f6a4eb,PodSandboxId:f1e53f7f6da2c20451b0b74b0d60bb47338cd7a5ecb4027db5d06d51c5e140ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1722287045532930527,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,},Annotations:map[string]string{io.kubernetes.container.hash: bc79eed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbabc0f984e92717c5c092e306d03f330e6116ee14a528087d770d2eed1717de,PodSandboxId:64e486d764876a7365a999e68ea1fdeaa272d354a4f279ccebadcc6abf1f014e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1,State:CONTAINER_EXITED,CreatedAt:1722287045087615947,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-45zxr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62f09954-bceb-4a05-a703-00b80c49e9bc,},Annotations:map[string]string{io.kubernetes.container.hash: f4fbac
e3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5832856fcd232fab3a660105d82da8acd8f636b0b0060a550cc26b80c9f0aad0,PodSandboxId:ae570803b06061f9532e12a4a597e48dbdd7538c1f9f09905095d66d7637e1b8,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d,State:CONTAINER_EXITED,CreatedAt:1722287044991106627,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb9d0d0929c1cc5faa7e9aaed304676a,},Annotations:map[string]string{io.kubernetes.container.hash: bce4469f,io.kubernetes.co
ntainer.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7c973d38feceff6cfe2a132b3fcabd20e359ae0051c4ee71d4332de088685c09,PodSandboxId:45fccc44ee359022b73fc7974505f90cfa0be139eeed54583bcb8cfd9a1fb96b,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,State:CONTAINER_EXITED,CreatedAt:1722287044980948514,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a4ed5a051ff710972d85f61a467cfef,},Annotations:map[string]string{io.kubernetes.container.hash: 18340588,io.kubernetes.container.restartCount: 1,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:303544245d2120299981d1b0508021cfaec29c22d849dae504c2f9faa8d12c6d,PodSandboxId:6f1f3243ea9108c91c8dba5c6736132b9261dbd14bdb06648e15862d0f6b71a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2,State:CONTAINER_EXITED,CreatedAt:1722287044879919370,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cb093dff2f8f2763c3e735334f097b2f,},Annotations:map[string]string{io.kubernetes.container.hash: 7337c8d9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePa
th: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1eb0c4e3c63f2fea6c7396214fa18a62189dc75790a83042a636d74d989b5e7f,PodSandboxId:9cb53e7cd55a0a7ff284fcd8ce591f8ee1de2e746602203b38c6b16420311fc0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e,State:CONTAINER_EXITED,CreatedAt:1722287044746670279,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-913034,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3d7fd520617d6aa29f585dbbd93fd9e,},Annotations:map[string]string{io.kubernetes.container.hash: bcb4918f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationM
essagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:768d8e818b193afe9ad99106fd466249bf66720e49875fca9e86bf0753910ad3,PodSandboxId:ee962aa1517b668148481f7e46cd61d401dd925a73c6e51cc4f22388981bf4cf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1722287026247468987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7db6d8ff4d-djrln,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5526db23-d0f1-48ca-bd4e-d87981b47b51,},Annotations:map[string]string{io.kubernetes.container.hash: bc79eed5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns
-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=985e10cb-11e1-4e3c-8eda-5ab35cd72244 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	995939ce45b88       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   20 seconds ago       Running             kube-proxy                2                   64e486d764876       kube-proxy-45zxr
	7631b8b65edf4       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   25 seconds ago       Running             kube-controller-manager   2                   9cb53e7cd55a0       kube-controller-manager-pause-913034
	69a4081219038       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   25 seconds ago       Running             kube-scheduler            2                   6f1f3243ea910       kube-scheduler-pause-913034
	c0c4b3e419cc6       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   25 seconds ago       Running             etcd                      2                   45fccc44ee359       etcd-pause-913034
	722982b05e1b0       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   25 seconds ago       Running             kube-apiserver            2                   ae570803b0606       kube-apiserver-pause-913034
	643c95d4748db       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   49 seconds ago       Running             coredns                   1                   f1e53f7f6da2c       coredns-7db6d8ff4d-djrln
	cbabc0f984e92       55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1   49 seconds ago       Exited              kube-proxy                1                   64e486d764876       kube-proxy-45zxr
	5832856fcd232       1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d   49 seconds ago       Exited              kube-apiserver            1                   ae570803b0606       kube-apiserver-pause-913034
	7c973d38fecef       3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899   49 seconds ago       Exited              etcd                      1                   45fccc44ee359       etcd-pause-913034
	303544245d212       3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2   49 seconds ago       Exited              kube-scheduler            1                   6f1f3243ea910       kube-scheduler-pause-913034
	1eb0c4e3c63f2       76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e   49 seconds ago       Exited              kube-controller-manager   1                   9cb53e7cd55a0       kube-controller-manager-pause-913034
	768d8e818b193       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   About a minute ago   Exited              coredns                   0                   ee962aa1517b6       coredns-7db6d8ff4d-djrln
	
	
	==> coredns [643c95d4748dbf727a8d2b4772b800ef2f12b5c90d7cd3c733db684715f6a4eb] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59373 - 65319 "HINFO IN 3400262441109169624.3639415197000515649. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010977445s
	
	
	==> coredns [768d8e818b193afe9ad99106fd466249bf66720e49875fca9e86bf0753910ad3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-913034
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-913034
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4a82558daae672551cead5b9fbac04be65e51d2a
	                    minikube.k8s.io/name=pause-913034
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T21_03_31_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 21:03:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-913034
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 21:04:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 21:04:33 +0000   Mon, 29 Jul 2024 21:03:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 21:04:33 +0000   Mon, 29 Jul 2024 21:03:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 21:04:33 +0000   Mon, 29 Jul 2024 21:03:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 21:04:33 +0000   Mon, 29 Jul 2024 21:03:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.20
	  Hostname:    pause-913034
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 2ecd6e4afa4d4e4e957e3f245e06394c
	  System UUID:                2ecd6e4a-fa4d-4e4e-957e-3f245e06394c
	  Boot ID:                    c0ce05dd-122d-4244-a605-722abf0e3a9d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-djrln                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     70s
	  kube-system                 etcd-pause-913034                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         83s
	  kube-system                 kube-apiserver-pause-913034             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-pause-913034    200m (10%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-proxy-45zxr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-scheduler-pause-913034             100m (5%)     0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 68s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  Starting                 45s                kube-proxy       
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  89s (x8 over 90s)  kubelet          Node pause-913034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s (x8 over 90s)  kubelet          Node pause-913034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s (x7 over 90s)  kubelet          Node pause-913034 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    83s                kubelet          Node pause-913034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  83s                kubelet          Node pause-913034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     83s                kubelet          Node pause-913034 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  NodeReady                82s                kubelet          Node pause-913034 status is now: NodeReady
	  Normal  RegisteredNode           71s                node-controller  Node pause-913034 event: Registered Node pause-913034 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-913034 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-913034 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-913034 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                 node-controller  Node pause-913034 event: Registered Node pause-913034 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.546111] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.069928] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.052719] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.153594] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.122840] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.286666] systemd-fstab-generator[659]: Ignoring "noauto" option for root device
	[  +4.167207] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +4.385304] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.078202] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.006061] systemd-fstab-generator[1275]: Ignoring "noauto" option for root device
	[  +0.071968] kauditd_printk_skb: 69 callbacks suppressed
	[ +13.799533] systemd-fstab-generator[1486]: Ignoring "noauto" option for root device
	[  +0.168409] kauditd_printk_skb: 21 callbacks suppressed
	[  +8.409878] kauditd_printk_skb: 89 callbacks suppressed
	[Jul29 21:04] systemd-fstab-generator[2355]: Ignoring "noauto" option for root device
	[  +0.158121] systemd-fstab-generator[2367]: Ignoring "noauto" option for root device
	[  +0.192305] systemd-fstab-generator[2381]: Ignoring "noauto" option for root device
	[  +0.124040] systemd-fstab-generator[2394]: Ignoring "noauto" option for root device
	[  +0.293754] systemd-fstab-generator[2422]: Ignoring "noauto" option for root device
	[  +0.950568] systemd-fstab-generator[2549]: Ignoring "noauto" option for root device
	[  +5.236193] kauditd_printk_skb: 195 callbacks suppressed
	[ +19.044637] systemd-fstab-generator[3393]: Ignoring "noauto" option for root device
	[  +5.833518] kauditd_printk_skb: 41 callbacks suppressed
	[ +15.004878] systemd-fstab-generator[3815]: Ignoring "noauto" option for root device
	
	
	==> etcd [7c973d38feceff6cfe2a132b3fcabd20e359ae0051c4ee71d4332de088685c09] <==
	{"level":"info","ts":"2024-07-29T21:04:05.750175Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T21:04:07.398553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc7eaa6ede108dec is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-29T21:04:07.398619Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc7eaa6ede108dec became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-29T21:04:07.398662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc7eaa6ede108dec received MsgPreVoteResp from fc7eaa6ede108dec at term 2"}
	{"level":"info","ts":"2024-07-29T21:04:07.398677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc7eaa6ede108dec became candidate at term 3"}
	{"level":"info","ts":"2024-07-29T21:04:07.398683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc7eaa6ede108dec received MsgVoteResp from fc7eaa6ede108dec at term 3"}
	{"level":"info","ts":"2024-07-29T21:04:07.398691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"fc7eaa6ede108dec became leader at term 3"}
	{"level":"info","ts":"2024-07-29T21:04:07.398698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: fc7eaa6ede108dec elected leader fc7eaa6ede108dec at term 3"}
	{"level":"info","ts":"2024-07-29T21:04:07.400957Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"fc7eaa6ede108dec","local-member-attributes":"{Name:pause-913034 ClientURLs:[https://192.168.61.20:2379]}","request-path":"/0/members/fc7eaa6ede108dec/attributes","cluster-id":"d382f5c9d11d69f0","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T21:04:07.401122Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T21:04:07.40117Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T21:04:07.401543Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T21:04:07.401636Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T21:04:07.403382Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.20:2379"}
	{"level":"info","ts":"2024-07-29T21:04:07.403973Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T21:04:25.961252Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-29T21:04:25.96133Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"pause-913034","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.20:2380"],"advertise-client-urls":["https://192.168.61.20:2379"]}
	{"level":"warn","ts":"2024-07-29T21:04:25.961429Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.61.20:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T21:04:25.961469Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.61.20:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T21:04:25.961591Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-29T21:04:25.961604Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-29T21:04:25.963414Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"fc7eaa6ede108dec","current-leader-member-id":"fc7eaa6ede108dec"}
	{"level":"info","ts":"2024-07-29T21:04:25.968564Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.61.20:2380"}
	{"level":"info","ts":"2024-07-29T21:04:25.968841Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.61.20:2380"}
	{"level":"info","ts":"2024-07-29T21:04:25.96886Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"pause-913034","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.61.20:2380"],"advertise-client-urls":["https://192.168.61.20:2379"]}
	
	
	==> etcd [c0c4b3e419cc67d30db41ed67e8e145a6250dc87c8a2a491fc4cce2ceb49625f] <==
	{"level":"warn","ts":"2024-07-29T21:04:33.795942Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.267440147s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-913034\" ","response":"range_response_count:1 size:5653"}
	{"level":"info","ts":"2024-07-29T21:04:33.795991Z","caller":"traceutil/trace.go:171","msg":"trace[330872765] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-913034; range_end:; response_count:1; response_revision:481; }","duration":"1.267520748s","start":"2024-07-29T21:04:32.528461Z","end":"2024-07-29T21:04:33.795982Z","steps":["trace[330872765] 'agreement among raft nodes before linearized reading'  (duration: 1.267358646s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T21:04:33.796022Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:32.528448Z","time spent":"1.267565233s","remote":"127.0.0.1:41342","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5677,"request content":"key:\"/registry/pods/kube-system/etcd-pause-913034\" "}
	{"level":"warn","ts":"2024-07-29T21:04:33.796357Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:32.4694Z","time spent":"1.326525931s","remote":"127.0.0.1:41428","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-913034\" mod_revision:423 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-913034\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-913034\" > >"}
	{"level":"warn","ts":"2024-07-29T21:04:33.796469Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"420.725991ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-29T21:04:33.796523Z","caller":"traceutil/trace.go:171","msg":"trace[1348443076] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:481; }","duration":"420.831637ms","start":"2024-07-29T21:04:33.37568Z","end":"2024-07-29T21:04:33.796512Z","steps":["trace[1348443076] 'agreement among raft nodes before linearized reading'  (duration: 420.771161ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T21:04:33.796547Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:33.375662Z","time spent":"420.879017ms","remote":"127.0.0.1:41136","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-07-29T21:04:33.796377Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.223699892s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:179"}
	{"level":"info","ts":"2024-07-29T21:04:33.796754Z","caller":"traceutil/trace.go:171","msg":"trace[1377317068] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:481; }","duration":"1.224090389s","start":"2024-07-29T21:04:32.572652Z","end":"2024-07-29T21:04:33.796742Z","steps":["trace[1377317068] 'agreement among raft nodes before linearized reading'  (duration: 1.223686919s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T21:04:33.796816Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:32.57264Z","time spent":"1.224165413s","remote":"127.0.0.1:41356","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":203,"request content":"key:\"/registry/serviceaccounts/kube-system/coredns\" "}
	{"level":"warn","ts":"2024-07-29T21:04:33.797014Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"557.408053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" ","response":"range_response_count:66 size:59397"}
	{"level":"info","ts":"2024-07-29T21:04:33.797059Z","caller":"traceutil/trace.go:171","msg":"trace[1913177609] range","detail":"{range_begin:/registry/clusterroles/; range_end:/registry/clusterroles0; response_count:66; response_revision:481; }","duration":"557.475682ms","start":"2024-07-29T21:04:33.239574Z","end":"2024-07-29T21:04:33.79705Z","steps":["trace[1913177609] 'agreement among raft nodes before linearized reading'  (duration: 557.097559ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T21:04:33.797084Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:33.239561Z","time spent":"557.516352ms","remote":"127.0.0.1:41504","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":66,"response size":59421,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" "}
	{"level":"info","ts":"2024-07-29T21:04:33.796488Z","caller":"traceutil/trace.go:171","msg":"trace[847856133] transaction","detail":"{read_only:false; number_of_response:0; response_revision:480; }","duration":"1.340101112s","start":"2024-07-29T21:04:32.456374Z","end":"2024-07-29T21:04:33.796475Z","steps":["trace[847856133] 'process raft request'  (duration: 951.343764ms)","trace[847856133] 'compare'  (duration: 387.125686ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T21:04:33.803508Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:32.456359Z","time spent":"1.347108548s","remote":"127.0.0.1:41332","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":29,"request content":"compare:<target:MOD key:\"/registry/minions/pause-913034\" mod_revision:0 > success:<request_put:<key:\"/registry/minions/pause-913034\" value_size:3854 >> failure:<>"}
	{"level":"warn","ts":"2024-07-29T21:04:33.797296Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"560.317766ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-node-critical\" ","response":"range_response_count:1 size:442"}
	{"level":"warn","ts":"2024-07-29T21:04:33.797686Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.225010013s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2024-07-29T21:04:33.805376Z","caller":"traceutil/trace.go:171","msg":"trace[1560037222] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:481; }","duration":"1.232749938s","start":"2024-07-29T21:04:32.572609Z","end":"2024-07-29T21:04:33.805359Z","steps":["trace[1560037222] 'agreement among raft nodes before linearized reading'  (duration: 1.225046866s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T21:04:33.805641Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:32.572591Z","time spent":"1.233033941s","remote":"127.0.0.1:41356","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":209,"request content":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" "}
	{"level":"info","ts":"2024-07-29T21:04:33.806072Z","caller":"traceutil/trace.go:171","msg":"trace[23262103] range","detail":"{range_begin:/registry/priorityclasses/system-node-critical; range_end:; response_count:1; response_revision:481; }","duration":"569.112754ms","start":"2024-07-29T21:04:33.236948Z","end":"2024-07-29T21:04:33.806061Z","steps":["trace[23262103] 'agreement among raft nodes before linearized reading'  (duration: 560.31377ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-29T21:04:33.808289Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-29T21:04:33.23689Z","time spent":"571.382448ms","remote":"127.0.0.1:41528","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":466,"request content":"key:\"/registry/priorityclasses/system-node-critical\" "}
	{"level":"info","ts":"2024-07-29T21:04:34.028572Z","caller":"traceutil/trace.go:171","msg":"trace[2071302045] linearizableReadLoop","detail":"{readStateIndex:511; appliedIndex:510; }","duration":"147.208145ms","start":"2024-07-29T21:04:33.881344Z","end":"2024-07-29T21:04:34.028552Z","steps":["trace[2071302045] 'read index received'  (duration: 119.48094ms)","trace[2071302045] 'applied index is now lower than readState.Index'  (duration: 27.72651ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-29T21:04:34.028849Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.481416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:monitoring\" ","response":"range_response_count:1 size:634"}
	{"level":"info","ts":"2024-07-29T21:04:34.028955Z","caller":"traceutil/trace.go:171","msg":"trace[845426345] range","detail":"{range_begin:/registry/clusterroles/system:monitoring; range_end:; response_count:1; response_revision:486; }","duration":"147.603721ms","start":"2024-07-29T21:04:33.881341Z","end":"2024-07-29T21:04:34.028945Z","steps":["trace[845426345] 'agreement among raft nodes before linearized reading'  (duration: 147.354049ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-29T21:04:34.029336Z","caller":"traceutil/trace.go:171","msg":"trace[1413533679] transaction","detail":"{read_only:false; response_revision:486; number_of_response:1; }","duration":"148.867902ms","start":"2024-07-29T21:04:33.880454Z","end":"2024-07-29T21:04:34.029322Z","steps":["trace[1413533679] 'process raft request'  (duration: 120.424889ms)","trace[1413533679] 'compare'  (duration: 27.597986ms)"],"step_count":2}
	
	
	==> kernel <==
	 21:04:55 up 2 min,  0 users,  load average: 0.81, 0.35, 0.13
	Linux pause-913034 5.10.207 #1 SMP Tue Jul 23 04:25:44 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5832856fcd232fab3a660105d82da8acd8f636b0b0060a550cc26b80c9f0aad0] <==
	I0729 21:04:15.902983       1 controller.go:167] Shutting down OpenAPI controller
	I0729 21:04:15.902998       1 naming_controller.go:302] Shutting down NamingConditionController
	I0729 21:04:15.903011       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	I0729 21:04:15.903038       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0729 21:04:15.903064       1 controller.go:129] Ending legacy_token_tracking_controller
	I0729 21:04:15.903069       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0729 21:04:15.903082       1 apf_controller.go:386] Shutting down API Priority and Fairness config worker
	I0729 21:04:15.903096       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0729 21:04:15.903106       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0729 21:04:15.904019       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0729 21:04:15.904282       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 21:04:15.904646       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 21:04:15.904729       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0729 21:04:15.904766       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0729 21:04:15.904788       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0729 21:04:15.904824       1 controller.go:157] Shutting down quota evaluator
	I0729 21:04:15.904847       1 controller.go:176] quota evaluator worker shutdown
	I0729 21:04:15.905119       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0729 21:04:15.905159       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0729 21:04:15.908327       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0729 21:04:15.908341       1 controller.go:176] quota evaluator worker shutdown
	I0729 21:04:15.908604       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0729 21:04:15.908737       1 controller.go:176] quota evaluator worker shutdown
	I0729 21:04:15.908842       1 controller.go:176] quota evaluator worker shutdown
	I0729 21:04:15.908847       1 controller.go:176] quota evaluator worker shutdown
	
	
	==> kube-apiserver [722982b05e1b0032900a080ea07ff83e51390f67ca696130409062239f21880a] <==
	I0729 21:04:33.805474       1 trace.go:236] Trace[977142255]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:28c1bfdd-aae0-479b-a8ad-858f61e245c9,client:127.0.0.1,api-group:rbac.authorization.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:clusterroles,scope:cluster,url:/apis/rbac.authorization.k8s.io/v1/clusterroles,user-agent:kube-apiserver/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:LIST (29-Jul-2024 21:04:33.238) (total time: 566ms):
	Trace[977142255]: ["List(recursive=true) etcd3" audit-id:28c1bfdd-aae0-479b-a8ad-858f61e245c9,key:/clusterroles,resourceVersion:,resourceVersionMatch:,limit:0,continue: 566ms (21:04:33.239)]
	Trace[977142255]: [566.466149ms] [566.466149ms] END
	I0729 21:04:33.809652       1 trace.go:236] Trace[1360136650]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:a943e3ec-f126-4150-98ed-9b3079d4b7ca,client:127.0.0.1,api-group:scheduling.k8s.io,api-version:v1,name:system-node-critical,subresource:,namespace:,protocol:HTTP/2.0,resource:priorityclasses,scope:resource,url:/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical,user-agent:kube-apiserver/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:GET (29-Jul-2024 21:04:33.236) (total time: 573ms):
	Trace[1360136650]: ---"About to write a response" 573ms (21:04:33.809)
	Trace[1360136650]: [573.475211ms] [573.475211ms] END
	I0729 21:04:33.815784       1 trace.go:236] Trace[1203678473]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:9d41f055-c25f-4621-b3fb-64f29f533d35,client:192.168.61.20,api-group:,api-version:v1,name:coredns,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/coredns/token,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 21:04:32.571) (total time: 1244ms):
	Trace[1203678473]: ---"watchCache locked acquired" 1236ms (21:04:33.807)
	Trace[1203678473]: [1.244171453s] [1.244171453s] END
	I0729 21:04:33.817122       1 trace.go:236] Trace[659771155]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:01ba8e6f-c87f-4c96-8e34-1fc37e0bf66b,client:192.168.61.20,api-group:,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 21:04:32.354) (total time: 1462ms):
	Trace[659771155]: ["Create etcd3" audit-id:01ba8e6f-c87f-4c96-8e34-1fc37e0bf66b,key:/minions/pause-913034,type:*core.Node,resource:nodes 1361ms (21:04:32.455)
	Trace[659771155]:  ---"Txn call succeeded" 1352ms (21:04:33.808)]
	Trace[659771155]: [1.462896691s] [1.462896691s] END
	I0729 21:04:33.821703       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0729 21:04:33.823510       1 trace.go:236] Trace[1842723628]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:0ec625c2-2892-4962-b014-6cc8dc18ea0a,client:192.168.61.20,api-group:,api-version:v1,name:kube-proxy,subresource:token,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token,user-agent:kubelet/v1.30.3 (linux/amd64) kubernetes/6fc0a69,verb:POST (29-Jul-2024 21:04:32.571) (total time: 1251ms):
	Trace[1842723628]: ---"watchCache locked acquired" 1247ms (21:04:33.819)
	Trace[1842723628]: [1.251703138s] [1.251703138s] END
	W0729 21:04:34.493540       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.61.20]
	I0729 21:04:34.495136       1 controller.go:615] quota admission added evaluator for: endpoints
	I0729 21:04:34.502468       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 21:04:34.795322       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0729 21:04:34.810690       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0729 21:04:34.853325       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0729 21:04:34.884937       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 21:04:34.893509       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [1eb0c4e3c63f2fea6c7396214fa18a62189dc75790a83042a636d74d989b5e7f] <==
	I0729 21:04:10.989068       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 21:04:10.989166       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 21:04:10.989268       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 21:04:10.989325       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0729 21:04:10.995066       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0729 21:04:10.995166       1 controllermanager.go:761] "Started controller" controller="node-lifecycle-controller"
	I0729 21:04:10.995482       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0729 21:04:10.995645       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0729 21:04:10.995675       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0729 21:04:11.009156       1 controllermanager.go:761] "Started controller" controller="clusterrole-aggregation-controller"
	I0729 21:04:11.009434       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0729 21:04:11.009706       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0729 21:04:11.012325       1 controllermanager.go:761] "Started controller" controller="persistentvolume-protection-controller"
	I0729 21:04:11.012534       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0729 21:04:11.013285       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0729 21:04:11.014818       1 controllermanager.go:761] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0729 21:04:11.014949       1 controllermanager.go:713] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0729 21:04:11.014934       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0729 21:04:11.015283       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0729 21:04:11.022359       1 shared_informer.go:320] Caches are synced for tokens
	W0729 21:04:21.018548       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.61.20:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.61.20:8443: connect: connection refused
	W0729 21:04:21.519920       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.61.20:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.61.20:8443: connect: connection refused
	W0729 21:04:22.521683       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.61.20:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.61.20:8443: connect: connection refused
	W0729 21:04:24.527196       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.61.20:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.61.20:8443: connect: connection refused
	E0729 21:04:24.527403       1 cidr_allocator.go:146] "Failed to list all nodes" err="Get \"https://192.168.61.20:8443/api/v1/nodes\": failed to get token for kube-system/node-controller: timed out waiting for the condition" logger="node-ipam-controller"
	
	
	==> kube-controller-manager [7631b8b65edf437a665d09a48d9071a082d5303ec1b370d0019269c804d603ec] <==
	I0729 21:04:46.328173       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-913034"
	I0729 21:04:46.328294       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0729 21:04:46.332418       1 shared_informer.go:320] Caches are synced for ephemeral
	I0729 21:04:46.342682       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0729 21:04:46.345171       1 shared_informer.go:320] Caches are synced for GC
	I0729 21:04:46.346268       1 shared_informer.go:320] Caches are synced for job
	I0729 21:04:46.350736       1 shared_informer.go:320] Caches are synced for disruption
	I0729 21:04:46.360483       1 shared_informer.go:320] Caches are synced for cronjob
	I0729 21:04:46.360554       1 shared_informer.go:320] Caches are synced for deployment
	I0729 21:04:46.363866       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0729 21:04:46.365595       1 shared_informer.go:320] Caches are synced for attach detach
	I0729 21:04:46.378298       1 shared_informer.go:320] Caches are synced for stateful set
	I0729 21:04:46.387185       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0729 21:04:46.388477       1 shared_informer.go:320] Caches are synced for PVC protection
	I0729 21:04:46.388517       1 shared_informer.go:320] Caches are synced for persistent volume
	I0729 21:04:46.396900       1 shared_informer.go:320] Caches are synced for HPA
	I0729 21:04:46.401384       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0729 21:04:46.411330       1 shared_informer.go:320] Caches are synced for endpoint
	I0729 21:04:46.411386       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0729 21:04:46.411497       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.133µs"
	I0729 21:04:46.416273       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 21:04:46.421669       1 shared_informer.go:320] Caches are synced for resource quota
	I0729 21:04:46.841509       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 21:04:46.861125       1 shared_informer.go:320] Caches are synced for garbage collector
	I0729 21:04:46.861158       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [995939ce45b8807f3b903e65d133adca9cd15b9b1630bf9651c77791c38eee6f] <==
	I0729 21:04:34.263133       1 server_linux.go:69] "Using iptables proxy"
	I0729 21:04:34.275768       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.20"]
	I0729 21:04:34.330867       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 21:04:34.330936       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 21:04:34.330959       1 server_linux.go:165] "Using iptables Proxier"
	I0729 21:04:34.333937       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 21:04:34.334166       1 server.go:872] "Version info" version="v1.30.3"
	I0729 21:04:34.334518       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 21:04:34.336057       1 config.go:192] "Starting service config controller"
	I0729 21:04:34.336103       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 21:04:34.336136       1 config.go:101] "Starting endpoint slice config controller"
	I0729 21:04:34.336157       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 21:04:34.336821       1 config.go:319] "Starting node config controller"
	I0729 21:04:34.336856       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 21:04:34.436778       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 21:04:34.436939       1 shared_informer.go:320] Caches are synced for service config
	I0729 21:04:34.437033       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [cbabc0f984e92717c5c092e306d03f330e6116ee14a528087d770d2eed1717de] <==
	I0729 21:04:06.352489       1 server_linux.go:69] "Using iptables proxy"
	I0729 21:04:08.999689       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.61.20"]
	I0729 21:04:09.036168       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0729 21:04:09.036279       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0729 21:04:09.036301       1 server_linux.go:165] "Using iptables Proxier"
	I0729 21:04:09.038706       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 21:04:09.038911       1 server.go:872] "Version info" version="v1.30.3"
	I0729 21:04:09.038923       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 21:04:09.040050       1 config.go:192] "Starting service config controller"
	I0729 21:04:09.040066       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 21:04:09.040093       1 config.go:101] "Starting endpoint slice config controller"
	I0729 21:04:09.040097       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 21:04:09.040720       1 config.go:319] "Starting node config controller"
	I0729 21:04:09.040772       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 21:04:09.141128       1 shared_informer.go:320] Caches are synced for node config
	I0729 21:04:09.141170       1 shared_informer.go:320] Caches are synced for service config
	I0729 21:04:09.141240       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [303544245d2120299981d1b0508021cfaec29c22d849dae504c2f9faa8d12c6d] <==
	I0729 21:04:06.669057       1 serving.go:380] Generated self-signed cert in-memory
	W0729 21:04:08.929119       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 21:04:08.929378       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 21:04:08.929500       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 21:04:08.929595       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 21:04:08.977964       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 21:04:08.978951       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 21:04:08.983050       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 21:04:08.983947       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 21:04:08.988524       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 21:04:08.983963       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 21:04:09.089645       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 21:04:26.095820       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0729 21:04:26.096477       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [69a40812190387eb204df88a23bb56d11263725a8341b5492a1c5693482f4c05] <==
	I0729 21:04:29.553193       1 serving.go:380] Generated self-signed cert in-memory
	W0729 21:04:32.278042       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 21:04:32.278076       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 21:04:32.278133       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 21:04:32.278140       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 21:04:32.316648       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 21:04:32.316680       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 21:04:32.318112       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 21:04:32.320294       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 21:04:32.320367       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 21:04:32.320395       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0729 21:04:32.421286       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 21:04:28 pause-913034 kubelet[3400]: I0729 21:04:28.788360    3400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3d7fd520617d6aa29f585dbbd93fd9e-ca-certs\") pod \"kube-controller-manager-pause-913034\" (UID: \"a3d7fd520617d6aa29f585dbbd93fd9e\") " pod="kube-system/kube-controller-manager-pause-913034"
	Jul 29 21:04:28 pause-913034 kubelet[3400]: I0729 21:04:28.788376    3400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb093dff2f8f2763c3e735334f097b2f-kubeconfig\") pod \"kube-scheduler-pause-913034\" (UID: \"cb093dff2f8f2763c3e735334f097b2f\") " pod="kube-system/kube-scheduler-pause-913034"
	Jul 29 21:04:28 pause-913034 kubelet[3400]: I0729 21:04:28.839657    3400 kubelet_node_status.go:73] "Attempting to register node" node="pause-913034"
	Jul 29 21:04:28 pause-913034 kubelet[3400]: E0729 21:04:28.840549    3400 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.20:8443: connect: connection refused" node="pause-913034"
	Jul 29 21:04:28 pause-913034 kubelet[3400]: I0729 21:04:28.991240    3400 scope.go:117] "RemoveContainer" containerID="7c973d38feceff6cfe2a132b3fcabd20e359ae0051c4ee71d4332de088685c09"
	Jul 29 21:04:28 pause-913034 kubelet[3400]: I0729 21:04:28.992792    3400 scope.go:117] "RemoveContainer" containerID="5832856fcd232fab3a660105d82da8acd8f636b0b0060a550cc26b80c9f0aad0"
	Jul 29 21:04:28 pause-913034 kubelet[3400]: I0729 21:04:28.994043    3400 scope.go:117] "RemoveContainer" containerID="1eb0c4e3c63f2fea6c7396214fa18a62189dc75790a83042a636d74d989b5e7f"
	Jul 29 21:04:28 pause-913034 kubelet[3400]: I0729 21:04:28.996390    3400 scope.go:117] "RemoveContainer" containerID="303544245d2120299981d1b0508021cfaec29c22d849dae504c2f9faa8d12c6d"
	Jul 29 21:04:29 pause-913034 kubelet[3400]: E0729 21:04:29.149648    3400 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-913034?timeout=10s\": dial tcp 192.168.61.20:8443: connect: connection refused" interval="800ms"
	Jul 29 21:04:29 pause-913034 kubelet[3400]: I0729 21:04:29.243001    3400 kubelet_node_status.go:73] "Attempting to register node" node="pause-913034"
	Jul 29 21:04:29 pause-913034 kubelet[3400]: E0729 21:04:29.247855    3400 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.61.20:8443: connect: connection refused" node="pause-913034"
	Jul 29 21:04:29 pause-913034 kubelet[3400]: W0729 21:04:29.488515    3400 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.20:8443: connect: connection refused
	Jul 29 21:04:29 pause-913034 kubelet[3400]: E0729 21:04:29.488608    3400 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.61.20:8443: connect: connection refused
	Jul 29 21:04:30 pause-913034 kubelet[3400]: I0729 21:04:30.049981    3400 kubelet_node_status.go:73] "Attempting to register node" node="pause-913034"
	Jul 29 21:04:32 pause-913034 kubelet[3400]: I0729 21:04:32.515345    3400 apiserver.go:52] "Watching apiserver"
	Jul 29 21:04:32 pause-913034 kubelet[3400]: I0729 21:04:32.520095    3400 topology_manager.go:215] "Topology Admit Handler" podUID="62f09954-bceb-4a05-a703-00b80c49e9bc" podNamespace="kube-system" podName="kube-proxy-45zxr"
	Jul 29 21:04:32 pause-913034 kubelet[3400]: I0729 21:04:32.520450    3400 topology_manager.go:215] "Topology Admit Handler" podUID="5526db23-d0f1-48ca-bd4e-d87981b47b51" podNamespace="kube-system" podName="coredns-7db6d8ff4d-djrln"
	Jul 29 21:04:32 pause-913034 kubelet[3400]: I0729 21:04:32.544684    3400 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 29 21:04:32 pause-913034 kubelet[3400]: I0729 21:04:32.570184    3400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62f09954-bceb-4a05-a703-00b80c49e9bc-lib-modules\") pod \"kube-proxy-45zxr\" (UID: \"62f09954-bceb-4a05-a703-00b80c49e9bc\") " pod="kube-system/kube-proxy-45zxr"
	Jul 29 21:04:32 pause-913034 kubelet[3400]: I0729 21:04:32.570438    3400 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62f09954-bceb-4a05-a703-00b80c49e9bc-xtables-lock\") pod \"kube-proxy-45zxr\" (UID: \"62f09954-bceb-4a05-a703-00b80c49e9bc\") " pod="kube-system/kube-proxy-45zxr"
	Jul 29 21:04:33 pause-913034 kubelet[3400]: I0729 21:04:33.831110    3400 kubelet_node_status.go:112] "Node was previously registered" node="pause-913034"
	Jul 29 21:04:33 pause-913034 kubelet[3400]: I0729 21:04:33.831816    3400 kubelet_node_status.go:76] "Successfully registered node" node="pause-913034"
	Jul 29 21:04:33 pause-913034 kubelet[3400]: I0729 21:04:33.836898    3400 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 29 21:04:33 pause-913034 kubelet[3400]: I0729 21:04:33.838642    3400 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 29 21:04:34 pause-913034 kubelet[3400]: I0729 21:04:34.021816    3400 scope.go:117] "RemoveContainer" containerID="cbabc0f984e92717c5c092e306d03f330e6116ee14a528087d770d2eed1717de"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-913034 -n pause-913034
helpers_test.go:261: (dbg) Run:  kubectl --context pause-913034 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (64.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (7200.059s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-518643 --alsologtostderr -v=3
E0729 21:22:47.258268  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/calico-404553/client.crt: no such file or directory
E0729 21:23:06.038491  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/custom-flannel-404553/client.crt: no such file or directory
E0729 21:23:12.255516  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/enable-default-cni-404553/client.crt: no such file or directory
E0729 21:23:14.090623  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (23m38s)
	TestNetworkPlugins/group (12m52s)
	TestStartStop (20m7s)
	TestStartStop/group/default-k8s-diff-port (3m22s)
	TestStartStop/group/default-k8s-diff-port/serial (3m22s)
	TestStartStop/group/default-k8s-diff-port/serial/Stop (1m42s)
	TestStartStop/group/embed-certs (12m52s)
	TestStartStop/group/embed-certs/serial (12m52s)
	TestStartStop/group/embed-certs/serial/SecondStart (8m40s)
	TestStartStop/group/no-preload (13m1s)
	TestStartStop/group/no-preload/serial (13m1s)
	TestStartStop/group/no-preload/serial/SecondStart (8m31s)
	TestStartStop/group/old-k8s-version (14m36s)
	TestStartStop/group/old-k8s-version/serial (14m36s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (7m43s)

                                                
                                                
goroutine 3389 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 3 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00014fd40, 0xc000af9bb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc00012c780, {0x49d0120, 0x2b, 0x2b}, {0x26b5f62?, 0xc00090fb00?, 0x4a8ca60?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0007d2b40)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0007d2b40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:133 +0x195

                                                
                                                
goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00052e280)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 67 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 66
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 1818 [chan receive, 12 minutes]:
testing.(*testContext).waitParallel(0xc0006a9770)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc001631040, 0xc0015a27f8)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1694
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2273 [chan receive, 13 minutes]:
testing.(*T).Run(0xc001891040, {0x265cb74?, 0x0?}, 0xc001a6ae00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001891040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001891040, 0xc00190c400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2269
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3185 [chan receive, 9 minutes]:
testing.(*T).Run(0xc00196c340, {0x26689b7?, 0x60400000004?}, 0xc0016cc100)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc00196c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc00196c340, 0xc0016cc080)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2275
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2363 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00193e0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2359
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2608 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000b277a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2607
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3119 [chan receive, 9 minutes]:
testing.(*T).Run(0xc001891860, {0x26689b7?, 0x60400000004?}, 0xc001b36480)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001891860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001891860, 0xc001a6ae00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2273
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2997 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0016aced0, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001883e60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0016acf00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00182d3b0, {0x3695880, 0xc001cceb40}, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00182d3b0, 0x3b9aca00, 0x0, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2912
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2698 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2697
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2349 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc00190c790, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000781e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00190c7c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b7e3a0, {0x3695880, 0xc000b121b0}, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b7e3a0, 0x3b9aca00, 0x0, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2458
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 427 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001882fc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 366
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 655 [chan send, 74 minutes]:
os/exec.(*Cmd).watchCtx(0xc001c8db00, 0xc001bb9ce0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 357
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2342 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2341
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2269 [chan receive, 21 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0018909c0, 0x313a300)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1742
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3301 [syscall, 9 minutes]:
syscall.Syscall6(0xf7, 0x1, 0xc41aa, 0xc000b92ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001c8a630)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001c8a630)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001a08600)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001a08600)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001d0b1e0, 0xc001a08600)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36b9740, 0xc0004208c0}, 0xc001d0b1e0, {0xc001d02d50, 0x11}, {0x0?, 0xc000bf6f60?}, {0x551133?, 0x4a170f?}, {0xc0015bc600, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001d0b1e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001d0b1e0, 0xc001b36480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3119
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3356 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000899140, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3316
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2457 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000781f20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2456
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 706 [chan send, 74 minutes]:
os/exec.(*Cmd).watchCtx(0xc00021bc80, 0xc0007ea2a0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 673
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3355 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0007814a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3316
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 1694 [chan receive, 24 minutes]:
testing.(*T).Run(0xc0016301a0, {0x265b5c9?, 0x55127c?}, 0xc0015a27f8)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0016301a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc0016301a0, 0x313a0e0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3224 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9900, 0xc000060cc0}, 0xc0013b6f50, 0xc000b85f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9900, 0xc000060cc0}, 0xc0?, 0xc0013b6f50, 0xc0013b6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9900?, 0xc000060cc0?}, 0x99b656?, 0xc001ec2d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013b6fd0?, 0x592e44?, 0xc001953880?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3150
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 209 [IO wait, 78 minutes]:
internal/poll.runtime_pollWait(0x7f1bc25e0cb0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xf?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc000638580)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc000638580)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0008282a0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0008282a0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0005620f0, {0x36ac760, 0xc0008282a0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0005620f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc00081b040?, 0xc00081b040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 206
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

                                                
                                                
goroutine 428 [chan receive, 76 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0000dd9c0, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 366
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 571 [chan send, 74 minutes]:
os/exec.(*Cmd).watchCtx(0xc0001fc480, 0xc000132ba0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 570
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3281 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x7f1bc25e08d0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0018f4d80?, 0xc001afc2db?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0018f4d80, {0xc001afc2db, 0x525, 0x525})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006aa300, {0xc001afc2db?, 0x5383e0?, 0x22f?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00137cc90, {0x3694320, 0xc0013fe2d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3694460, 0xc00137cc90}, {0x3694320, 0xc0013fe2d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0006aa300?, {0x3694460, 0xc00137cc90})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0006aa300, {0x3694460, 0xc00137cc90})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3694460, 0xc00137cc90}, {0x3694380, 0xc0006aa300}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0016cc100?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3280
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 393 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0000dd990, 0x23)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001882ea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0000dd9c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001479810, {0x3695880, 0xc000b9d560}, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001479810, 0x3b9aca00, 0x0, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 428
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2593 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9900, 0xc000060cc0}, 0xc0013b4750, 0xc001a00f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9900, 0xc000060cc0}, 0x10?, 0xc0013b4750, 0xc0013b4798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9900?, 0xc000060cc0?}, 0xc001d0a4e0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013b47d0?, 0x592e44?, 0xc001984510?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2609
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3349 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x7f1bc25e0208, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00165bb60?, 0xc001d05adf?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00165bb60, {0xc001d05adf, 0x521, 0x521})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b6eea8, {0xc001d05adf?, 0x21a3700?, 0x20a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001987500, {0x3694320, 0xc0019ce4b8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3694460, 0xc001987500}, {0x3694320, 0xc0019ce4b8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000b6eea8?, {0x3694460, 0xc001987500})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000b6eea8, {0x3694460, 0xc001987500})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3694460, 0xc001987500}, {0x3694380, 0xc000b6eea8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001938580?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3348
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 394 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9900, 0xc000060cc0}, 0xc000b94f50, 0xc000b94f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9900, 0xc000060cc0}, 0x40?, 0xc000b94f50, 0xc000b94f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9900?, 0xc000060cc0?}, 0xc00081ba00?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013b8fd0?, 0x592e44?, 0xc001409140?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 428
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2979 [chan receive, 14 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00091ce80, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2961
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 830 [select, 74 minutes]:
net/http.(*persistConn).writeLoop(0xc00193a7e0)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 827
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

                                                
                                                
goroutine 2270 [chan receive, 14 minutes]:
testing.(*T).Run(0xc001890b60, {0x265cb74?, 0x0?}, 0xc0016cc280)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001890b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001890b60, 0xc00190c340)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2269
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 395 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 394
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 829 [select, 74 minutes]:
net/http.(*persistConn).readLoop(0xc00193a7e0)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 827
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

                                                
                                                
goroutine 3315 [select, 9 minutes]:
os/exec.(*Cmd).watchCtx(0xc00185c600, 0xc001bb8540)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3280
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2351 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2350
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3303 [IO wait]:
internal/poll.runtime_pollWait(0x7f1bc25e03f8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001db7c80?, 0xc00171d0cb?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001db7c80, {0xc00171d0cb, 0x1ef35, 0x1ef35})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0013fe348, {0xc00171d0cb?, 0x77?, 0x1fe33?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001430840, {0x3694320, 0xc0019ce358})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3694460, 0xc001430840}, {0x3694320, 0xc0019ce358}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0013fe348?, {0x3694460, 0xc001430840})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0013fe348, {0x3694460, 0xc001430840})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3694460, 0xc001430840}, {0x3694380, 0xc0013fe348}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc00185d080?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3301
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 3396 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x7f1bc25e07d8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001d50a80?, 0xc001e08c34?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d50a80, {0xc001e08c34, 0x3cc, 0x3cc})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b6ed58, {0xc001e08c34?, 0x7ffe62c4a276?, 0x34?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0014315c0, {0x3694320, 0xc0006aa2e8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3694460, 0xc0014315c0}, {0x3694320, 0xc0006aa2e8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000b6ed58?, {0x3694460, 0xc0014315c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000b6ed58, {0x3694460, 0xc0014315c0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3694460, 0xc0014315c0}, {0x3694380, 0xc000b6ed58}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001938580?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3395
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 3348 [syscall, 9 minutes]:
syscall.Syscall6(0xf7, 0x1, 0xc4302, 0xc001a04ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001b600f0)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001b600f0)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001ec2480)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001ec2480)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001415040, 0xc001ec2480)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36b9740, 0xc0004820e0}, 0xc001415040, {0xc00005ed98, 0x16}, {0x0?, 0xc001dfe760?}, {0x551133?, 0x4a170f?}, {0xc001ec3200, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001415040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001415040, 0xc001938580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2962
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3395 [syscall, 3 minutes]:
syscall.Syscall6(0xf7, 0x1, 0xc4ca6, 0xc001629a90, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001c8b110)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001c8b110)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001a08d80)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001a08d80)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0014149c0, 0xc001a08d80)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateStop({0x36b9740?, 0xc00047e1c0?}, 0xc0014149c0, {0xc00185e740?, 0x5518ce?}, {0x0?, 0xc001575760?}, {0x551133?, 0x4a170f?}, {0xc0015bc100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:228 +0x17b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0014149c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0014149c0, 0xc001938580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3363
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3361 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3360
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2998 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9900, 0xc000060cc0}, 0xc000099f50, 0xc000099f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9900, 0xc000060cc0}, 0xc0?, 0xc000099f50, 0xc000099f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9900?, 0xc000060cc0?}, 0xc001d0a340?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000099fd0?, 0x592e44?, 0xc001c141e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2912
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3360 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9900, 0xc000060cc0}, 0xc000bf6f50, 0xc000bf6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9900, 0xc000060cc0}, 0x11?, 0xc000bf6f50, 0xc000bf6f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9900?, 0xc000060cc0?}, 0xc001d0b1e0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000bf6fd0?, 0x592e44?, 0xc001b36480?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3356
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2903 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc00091ce50, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0018f46c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00091ce80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00182cf60, {0x3695880, 0xc001c14ab0}, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00182cf60, 0x3b9aca00, 0x0, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2979
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2978 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0018f47e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2961
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3304 [select, 9 minutes]:
os/exec.(*Cmd).watchCtx(0xc001a08600, 0xc001fc51a0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3301
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 2999 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2998
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2609 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001b48580, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2607
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2904 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9900, 0xc000060cc0}, 0xc001dfb750, 0xc001dfb798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9900, 0xc000060cc0}, 0x20?, 0xc001dfb750, 0xc001dfb798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9900?, 0xc000060cc0?}, 0xc0018901a0?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00182cf30?, 0xc00184cbc0?, 0xc001dfb7a8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2979
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2696 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001952710, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0018f5ce0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001952740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b66f40, {0x3695880, 0xc001ccf5f0}, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000b66f40, 0x3b9aca00, 0x0, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2724
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2350 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9900, 0xc000060cc0}, 0xc00050af50, 0xc00050af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9900, 0xc000060cc0}, 0xc0?, 0xc00050af50, 0xc00050af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9900?, 0xc000060cc0?}, 0x99b656?, 0xc00021b080?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00050afd0?, 0x592e44?, 0xc0002233f0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2458
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1742 [chan receive, 21 minutes]:
testing.(*T).Run(0xc0007f4340, {0x265b5c9?, 0x551133?}, 0x313a300)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0007f4340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0007f4340, 0x313a128)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2458 [chan receive, 17 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00190c7c0, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2456
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3397 [IO wait]:
internal/poll.runtime_pollWait(0x7f1bc25e09c8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001d50b40?, 0xc000bc217e?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001d50b40, {0xc000bc217e, 0x3e82, 0x3e82})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b6ed78, {0xc000bc217e?, 0x0?, 0x3e2d?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0014315f0, {0x3694320, 0xc0019ce348})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3694460, 0xc0014315f0}, {0x3694320, 0xc0019ce348}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000b6ed78?, {0x3694460, 0xc0014315f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000b6ed78, {0x3694460, 0xc0014315f0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3694460, 0xc0014315f0}, {0x3694380, 0xc000b6ed78}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001f60420?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3395
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 2675 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2674
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2592 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001b48550, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000b27680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001b48580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0018da000, {0x3695880, 0xc001cce000}, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0018da000, 0x3b9aca00, 0x0, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2609
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2341 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9900, 0xc000060cc0}, 0xc001a6c750, 0xc001a07f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9900, 0xc000060cc0}, 0xa0?, 0xc001a6c750, 0xc001a6c798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9900?, 0xc000060cc0?}, 0xc001414340?, 0x551a60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x592de5?, 0xc0001a8600?, 0xc0015097a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2364
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3398 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc001a08d80, 0xc001408e40)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3395
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3359 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000899110, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000781380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000899140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00006b080, {0x3695880, 0xc001472420}, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00006b080, 0x3b9aca00, 0x0, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3356
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2962 [chan receive, 9 minutes]:
testing.(*T).Run(0xc001d0a680, {0x26689b7?, 0x60400000004?}, 0xc001938580)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001d0a680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001d0a680, 0xc0016cc280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2270
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2610 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2593
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2697 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9900, 0xc000060cc0}, 0xc000bfc750, 0xc000bfc798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9900, 0xc000060cc0}, 0xc0?, 0xc000bfc750, 0xc000bfc798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9900?, 0xc000060cc0?}, 0x99b656?, 0xc00021b980?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000bfc7d0?, 0x592e44?, 0xc0000dcd80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2724
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2272 [chan receive, 3 minutes]:
testing.(*T).Run(0xc001890ea0, {0x265cb74?, 0x0?}, 0xc0016cc180)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001890ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001890ea0, 0xc00190c3c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2269
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2271 [chan receive, 21 minutes]:
testing.(*testContext).waitParallel(0xc0006a9770)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001890d00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001890d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001890d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001890d00, 0xc00190c380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2269
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3363 [chan receive, 3 minutes]:
testing.(*T).Run(0xc001d0a340, {0x265a774?, 0x60400000004?}, 0xc001938580)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001d0a340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001d0a340, 0xc0016cc180)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2272
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2275 [chan receive, 12 minutes]:
testing.(*T).Run(0xc001891380, {0x265cb74?, 0x0?}, 0xc0016cc080)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001891380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001891380, 0xc00190c540)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2269
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2723 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0018f5e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2522
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2636 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0016ac140, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2666
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2724 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001952740, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2522
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2364 [chan receive, 18 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00091c8c0, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2359
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2340 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc00091c890, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001db7f20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00091c8c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001940010, {0x3695880, 0xc001430030}, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001940010, 0x3b9aca00, 0x0, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2364
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2905 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2904
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2674 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36b9900, 0xc000060cc0}, 0xc000bfdf50, 0xc000bfdf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x36b9900, 0xc000060cc0}, 0x40?, 0xc000bfdf50, 0xc000bfdf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36b9900?, 0xc000060cc0?}, 0x10000c0006a94f0?, 0xc0006a94f0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000bfdfd0?, 0x592e44?, 0xc0001a8900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2636
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 2912 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0016acf00, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2975
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2635 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001db7500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2666
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2625 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0016ac110, 0x13)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001db72c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0016ac140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001c51880, {0x3695880, 0xc0018b74d0}, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001c51880, 0x3b9aca00, 0x0, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2636
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3314 [IO wait]:
internal/poll.runtime_pollWait(0x7f1bc25e0018, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0018f4e40?, 0xc0025d9d88?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0018f4e40, {0xc0025d9d88, 0x1e278, 0x1e278})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006aa348, {0xc0025d9d88?, 0xc000507d30?, 0x1fe5c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00137ccf0, {0x3694320, 0xc0019ce298})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3694460, 0xc00137ccf0}, {0x3694320, 0xc0019ce298}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0006aa348?, {0x3694460, 0xc00137ccf0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0006aa348, {0x3694460, 0xc00137ccf0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3694460, 0xc00137ccf0}, {0x3694380, 0xc0006aa348}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001bb8360?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3280
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 2911 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000780c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2975
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3280 [syscall, 9 minutes]:
syscall.Syscall6(0xf7, 0x1, 0xc412a, 0xc001447ab0, 0x1000004, 0x0, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:91 +0x39
os.(*Process).blockUntilWaitable(0xc001f96570)
	/usr/local/go/src/os/wait_waitid.go:32 +0x76
os.(*Process).wait(0xc001f96570)
	/usr/local/go/src/os/exec_unix.go:22 +0x25
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00185c600)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc00185c600)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc00196c4e0, 0xc00185c600)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x36b9740, 0xc0000f6070}, 0xc00196c4e0, {0xc0017d2090, 0x12}, {0x0?, 0xc000bf6f60?}, {0x551133?, 0x4a170f?}, {0xc0007d4500, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc00196c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc00196c4e0, 0xc0016cc100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3185
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3302 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x7f1bc25e0ac0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001db7bc0?, 0xc001d042a6?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001db7bc0, {0xc001d042a6, 0x55a, 0x55a})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0013fe308, {0xc001d042a6?, 0x5383e0?, 0x22e?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001430810, {0x3694320, 0xc000b6ed48})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3694460, 0xc001430810}, {0x3694320, 0xc000b6ed48}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0013fe308?, {0x3694460, 0xc001430810})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0013fe308, {0x3694460, 0xc001430810})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3694460, 0xc001430810}, {0x3694380, 0xc0013fe308}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001b36480?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3301
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 3223 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008996d0, 0x2)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x2148a40?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001d51e60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000899700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001afbc90, {0x3695880, 0xc001c3fef0}, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001afbc90, 0x3b9aca00, 0x0, 0x1, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3150
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3149 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000b26480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3219
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3150 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000899700, 0xc000060cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3219
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3351 [select, 9 minutes]:
os/exec.(*Cmd).watchCtx(0xc001ec2480, 0xc001f60a80)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3348
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3225 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3224
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3350 [IO wait]:
internal/poll.runtime_pollWait(0x7f1bc25e04f0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00165bc20?, 0xc001e5db70?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00165bc20, {0xc001e5db70, 0x30490, 0x30490})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000b6eed8, {0xc001e5db70?, 0xc001dffd30?, 0x3fe20?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0019875f0, {0x3694320, 0xc0013fe650})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3694460, 0xc0019875f0}, {0x3694320, 0xc0013fe650}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000b6eed8?, {0x3694460, 0xc0019875f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000b6eed8, {0x3694460, 0xc0019875f0})
	/usr/local/go/src/os/file.go:247 +0x9c
io.copyBuffer({0x3694460, 0xc0019875f0}, {0x3694380, 0xc000b6eed8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001fc4600?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3348
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae
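For reference, many of the parked goroutines above are client-go certificate-rotation workers blocked in wait.PollImmediateUntilWithContext and workqueue.Get. A minimal, hypothetical sketch of that polling pattern follows; checkReady and the timeout are invented for illustration and are not taken from the minikube test code.

```go
// Hypothetical sketch of the wait.PollImmediateUntilWithContext pattern
// visible in the blocked goroutines; not code from this test suite.
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// checkReady is a placeholder condition: return true to stop polling,
// false to poll again, or a non-nil error to abort.
func checkReady(ctx context.Context) (bool, error) {
	return false, nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Runs checkReady immediately, then once per second, until it reports
	// done, returns an error, or ctx ends -- the same frames that appear
	// in the goroutine dump above.
	err := wait.PollImmediateUntilWithContext(ctx, time.Second, checkReady)
	fmt.Println("polling finished:", err)
}
```

The condition runs once immediately and then once per interval until it reports done, returns an error, or the context ends.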

                                                
                                    

Test pass (168/215)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 45.94
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 12.36
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.06
18 TestDownloadOnly/v1.30.3/DeleteAll 0.14
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.12
21 TestDownloadOnly/v1.31.0-beta.0/json-events 40.8
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.06
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.13
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.57
31 TestOffline 70.5
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
37 TestCertOptions 73.51
40 TestForceSystemdFlag 42.67
41 TestForceSystemdEnv 69.57
43 TestKVMDriverInstallOrUpdate 4.62
47 TestErrorSpam/setup 43.09
48 TestErrorSpam/start 0.34
49 TestErrorSpam/status 0.69
50 TestErrorSpam/pause 1.47
51 TestErrorSpam/unpause 1.52
52 TestErrorSpam/stop 4.39
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 52.36
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 42.2
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.6
64 TestFunctional/serial/CacheCmd/cache/add_local 2.08
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
69 TestFunctional/serial/CacheCmd/cache/delete 0.09
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
72 TestFunctional/serial/ExtraConfig 33.71
73 TestFunctional/serial/ComponentHealth 0.07
74 TestFunctional/serial/LogsCmd 1.37
75 TestFunctional/serial/LogsFileCmd 1.4
76 TestFunctional/serial/InvalidService 4.34
78 TestFunctional/parallel/ConfigCmd 0.31
79 TestFunctional/parallel/DashboardCmd 19.04
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.16
82 TestFunctional/parallel/StatusCmd 0.78
86 TestFunctional/parallel/ServiceCmdConnect 12.62
87 TestFunctional/parallel/AddonsCmd 0.12
88 TestFunctional/parallel/PersistentVolumeClaim 36.3
90 TestFunctional/parallel/SSHCmd 0.44
91 TestFunctional/parallel/CpCmd 1.42
92 TestFunctional/parallel/MySQL 27.78
93 TestFunctional/parallel/FileSync 0.19
94 TestFunctional/parallel/CertSync 1.32
98 TestFunctional/parallel/NodeLabels 0.07
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.38
102 TestFunctional/parallel/License 0.57
103 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
107 TestFunctional/parallel/ProfileCmd/profile_list 0.28
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.76
116 TestFunctional/parallel/ServiceCmd/DeployApp 20.23
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.28
118 TestFunctional/parallel/ImageCommands/ImageListShort 1.12
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.93
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.56
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.44
123 TestFunctional/parallel/ImageCommands/Setup 1.75
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.26
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.87
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.7
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.06
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.18
131 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
132 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
133 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
134 TestFunctional/parallel/MountCmd/any-port 8.43
135 TestFunctional/parallel/ServiceCmd/List 0.91
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.84
137 TestFunctional/parallel/MountCmd/specific-port 1.89
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.3
139 TestFunctional/parallel/ServiceCmd/Format 0.31
140 TestFunctional/parallel/ServiceCmd/URL 0.29
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.23
142 TestFunctional/delete_echo-server_images 0.04
143 TestFunctional/delete_my-image_image 0.02
144 TestFunctional/delete_minikube_cached_images 0.02
148 TestMultiControlPlane/serial/StartCluster 227.21
149 TestMultiControlPlane/serial/DeployApp 6.95
150 TestMultiControlPlane/serial/PingHostFromPods 1.2
151 TestMultiControlPlane/serial/AddWorkerNode 56.15
152 TestMultiControlPlane/serial/NodeLabels 0.07
153 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
154 TestMultiControlPlane/serial/CopyFile 12.65
156 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.47
158 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.4
160 TestMultiControlPlane/serial/DeleteSecondaryNode 17.14
161 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
163 TestMultiControlPlane/serial/RestartCluster 346.29
164 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.36
165 TestMultiControlPlane/serial/AddSecondaryNode 75.84
166 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.52
170 TestJSONOutput/start/Command 57.13
171 TestJSONOutput/start/Audit 0
173 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
174 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
176 TestJSONOutput/pause/Command 0.64
177 TestJSONOutput/pause/Audit 0
179 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/unpause/Command 0.59
183 TestJSONOutput/unpause/Audit 0
185 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/stop/Command 6.59
189 TestJSONOutput/stop/Audit 0
191 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
193 TestErrorJSONOutput 0.19
198 TestMainNoArgs 0.04
199 TestMinikubeProfile 83.6
202 TestMountStart/serial/StartWithMountFirst 27.8
203 TestMountStart/serial/VerifyMountFirst 0.38
204 TestMountStart/serial/StartWithMountSecond 23.57
205 TestMountStart/serial/VerifyMountSecond 0.37
206 TestMountStart/serial/DeleteFirst 0.69
207 TestMountStart/serial/VerifyMountPostDelete 0.37
208 TestMountStart/serial/Stop 1.27
209 TestMountStart/serial/RestartStopped 23.42
210 TestMountStart/serial/VerifyMountPostStop 0.38
213 TestMultiNode/serial/FreshStart2Nodes 118.66
214 TestMultiNode/serial/DeployApp2Nodes 5.13
215 TestMultiNode/serial/PingHostFrom2Pods 0.79
216 TestMultiNode/serial/AddNode 47.35
217 TestMultiNode/serial/MultiNodeLabels 0.06
218 TestMultiNode/serial/ProfileList 0.22
219 TestMultiNode/serial/CopyFile 7.14
220 TestMultiNode/serial/StopNode 2.27
221 TestMultiNode/serial/StartAfterStop 38.87
223 TestMultiNode/serial/DeleteNode 2.1
225 TestMultiNode/serial/RestartMultiNode 174.56
226 TestMultiNode/serial/ValidateNameConflict 44.26
233 TestScheduledStopUnix 110.61
237 TestRunningBinaryUpgrade 208.48
242 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
243 TestNoKubernetes/serial/StartWithK8s 98.94
255 TestNoKubernetes/serial/StartWithStopK8s 55.8
256 TestNoKubernetes/serial/Start 47.65
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
258 TestNoKubernetes/serial/ProfileList 2.14
259 TestNoKubernetes/serial/Stop 1.29
260 TestNoKubernetes/serial/StartNoArgs 37.23
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.2
263 TestPause/serial/Start 80.51
271 TestStoppedBinaryUpgrade/Setup 2.29
272 TestStoppedBinaryUpgrade/Upgrade 98.1
274 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
TestDownloadOnly/v1.20.0/json-events (45.94s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-959018 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-959018 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (45.940385197s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (45.94s)
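The "(dbg) Run" / "(dbg) Done" lines record the exact minikube invocation the test drove and how long it took. Below is a minimal, hypothetical sketch of driving the same binary from a Go test with os/exec; runMinikube and the 10-minute timeout are illustrative stand-ins, not the suite's own helper (which is integration.Run in test/integration/helpers_test.go, per the stack traces above).

```go
// minikube_cli_sketch_test.go -- hypothetical example, not part of the suite.
package integration

import (
	"context"
	"os/exec"
	"testing"
	"time"
)

// runMinikube is an illustrative stand-in for the suite's Run() helper.
func runMinikube(t *testing.T, args ...string) []byte {
	t.Helper()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	// CommandContext kills the child if ctx expires; the os/exec watchCtx and
	// writerDescriptor goroutines in the dump above come from invocations
	// like this one.
	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return out
}

func TestDownloadOnlySketch(t *testing.T) {
	// Mirrors the "(dbg) Run" invocation recorded above.
	runMinikube(t, "start", "-o=json", "--download-only",
		"-p", "download-only-959018", "--force", "--alsologtostderr",
		"--kubernetes-version=v1.20.0", "--container-runtime=crio", "--driver=kvm2")
}
```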

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-959018
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-959018: exit status 85 (59.93935ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-959018 | jenkins | v1.33.1 | 29 Jul 24 19:23 UTC |          |
	|         | -p download-only-959018        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:23:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:23:15.909912  740974 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:23:15.910037  740974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:23:15.910045  740974 out.go:304] Setting ErrFile to fd 2...
	I0729 19:23:15.910049  740974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:23:15.910223  740974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	W0729 19:23:15.910358  740974 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19344-733808/.minikube/config/config.json: open /home/jenkins/minikube-integration/19344-733808/.minikube/config/config.json: no such file or directory
	I0729 19:23:15.910939  740974 out.go:298] Setting JSON to true
	I0729 19:23:15.911899  740974 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":11143,"bootTime":1722269853,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:23:15.911963  740974 start.go:139] virtualization: kvm guest
	I0729 19:23:15.914839  740974 out.go:97] [download-only-959018] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0729 19:23:15.914978  740974 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 19:23:15.915026  740974 notify.go:220] Checking for updates...
	I0729 19:23:15.916684  740974 out.go:169] MINIKUBE_LOCATION=19344
	I0729 19:23:15.918349  740974 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:23:15.920012  740974 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 19:23:15.921737  740974 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 19:23:15.923404  740974 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 19:23:15.926509  740974 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 19:23:15.926762  740974 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:23:15.963190  740974 out.go:97] Using the kvm2 driver based on user configuration
	I0729 19:23:15.963242  740974 start.go:297] selected driver: kvm2
	I0729 19:23:15.963249  740974 start.go:901] validating driver "kvm2" against <nil>
	I0729 19:23:15.963618  740974 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:23:15.963703  740974 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:23:15.979620  740974 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:23:15.979685  740974 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 19:23:15.980274  740974 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 19:23:15.980439  740974 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 19:23:15.980509  740974 cni.go:84] Creating CNI manager for ""
	I0729 19:23:15.980522  740974 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:23:15.980530  740974 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 19:23:15.980615  740974 start.go:340] cluster config:
	{Name:download-only-959018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-959018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:23:15.980792  740974 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:23:15.982925  740974 out.go:97] Downloading VM boot image ...
	I0729 19:23:15.982974  740974 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19344-733808/.minikube/cache/iso/amd64/minikube-v1.33.1-1721690939-19319-amd64.iso
	I0729 19:23:24.782177  740974 out.go:97] Starting "download-only-959018" primary control-plane node in "download-only-959018" cluster
	I0729 19:23:24.782228  740974 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:23:24.881618  740974 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 19:23:24.881680  740974 cache.go:56] Caching tarball of preloaded images
	I0729 19:23:24.881867  740974 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:23:24.883734  740974 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 19:23:24.883765  740974 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 19:23:24.990403  740974 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0729 19:23:36.141481  740974 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 19:23:36.142331  740974 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 19:23:37.203720  740974 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0729 19:23:37.204099  740974 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/download-only-959018/config.json ...
	I0729 19:23:37.204130  740974 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/download-only-959018/config.json: {Name:mk02f630c5e77988519a8f93e3a47ff56870651b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:23:37.204343  740974 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0729 19:23:37.204548  740974 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-959018 host does not exist
	  To start a cluster, run: "minikube start -p download-only-959018"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
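The "Downloading: ...?checksum=md5:..." lines in the log above show minikube fetching the boot image and preload tarball and then verifying checksums before use. A minimal, hypothetical Go sketch of a checksum-verified download is shown below; the function name, URL, paths, and expected digest are placeholders, not minikube's implementation.

```go
// Hypothetical sketch of a checksum-verified download; not minikube code.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url into dst while hashing it, then compares the
// MD5 hex digest against wantMD5.
func downloadWithMD5(url, dst, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Tee the response body into both the file and the hash as it streams.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// Placeholder values; substitute a real URL and its published MD5 digest.
	err := downloadWithMD5("https://example.com/preload.tar.lz4",
		"/tmp/preload.tar.lz4", "expected-md5-hex-digest")
	fmt.Println("download result:", err)
}
```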

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-959018
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (12.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-634040 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-634040 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (12.357701324s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (12.36s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-634040
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-634040: exit status 85 (59.808565ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-959018 | jenkins | v1.33.1 | 29 Jul 24 19:23 UTC |                     |
	|         | -p download-only-959018        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 19:24 UTC | 29 Jul 24 19:24 UTC |
	| delete  | -p download-only-959018        | download-only-959018 | jenkins | v1.33.1 | 29 Jul 24 19:24 UTC | 29 Jul 24 19:24 UTC |
	| start   | -o=json --download-only        | download-only-634040 | jenkins | v1.33.1 | 29 Jul 24 19:24 UTC |                     |
	|         | -p download-only-634040        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:24:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:24:02.184262  741293 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:24:02.184532  741293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:24:02.184542  741293 out.go:304] Setting ErrFile to fd 2...
	I0729 19:24:02.184547  741293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:24:02.184714  741293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 19:24:02.185308  741293 out.go:298] Setting JSON to true
	I0729 19:24:02.186244  741293 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":11189,"bootTime":1722269853,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:24:02.186337  741293 start.go:139] virtualization: kvm guest
	I0729 19:24:02.188466  741293 out.go:97] [download-only-634040] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:24:02.188668  741293 notify.go:220] Checking for updates...
	I0729 19:24:02.189869  741293 out.go:169] MINIKUBE_LOCATION=19344
	I0729 19:24:02.191081  741293 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:24:02.192335  741293 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 19:24:02.193536  741293 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 19:24:02.194884  741293 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 19:24:02.197148  741293 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 19:24:02.197417  741293 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:24:02.230092  741293 out.go:97] Using the kvm2 driver based on user configuration
	I0729 19:24:02.230122  741293 start.go:297] selected driver: kvm2
	I0729 19:24:02.230128  741293 start.go:901] validating driver "kvm2" against <nil>
	I0729 19:24:02.230457  741293 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:24:02.230534  741293 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:24:02.246552  741293 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:24:02.246613  741293 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 19:24:02.247115  741293 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 19:24:02.247287  741293 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 19:24:02.247322  741293 cni.go:84] Creating CNI manager for ""
	I0729 19:24:02.247329  741293 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:24:02.247339  741293 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 19:24:02.247416  741293 start.go:340] cluster config:
	{Name:download-only-634040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-634040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:24:02.247552  741293 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:24:02.249218  741293 out.go:97] Starting "download-only-634040" primary control-plane node in "download-only-634040" cluster
	I0729 19:24:02.249239  741293 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:24:02.761138  741293 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	I0729 19:24:02.761185  741293 cache.go:56] Caching tarball of preloaded images
	I0729 19:24:02.761397  741293 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime crio
	I0729 19:24:02.763155  741293 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 19:24:02.763190  741293 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4 ...
	I0729 19:24:02.862820  741293 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:15191286f02471d9b3ea0b587fcafc39 -> /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-634040 host does not exist
	  To start a cluster, run: "minikube start -p download-only-634040"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.06s)
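Note: exit status 85 from "minikube logs" is expected for a download-only profile; as the captured output above says, the control-plane host was never created, so there are no logs to collect. A minimal sketch of reproducing the same check by hand, assuming the out/minikube-linux-amd64 binary and profile name used in this run:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-634040 --force --kubernetes-version=v1.30.3 --container-runtime=crio --driver=kvm2
	out/minikube-linux-amd64 logs -p download-only-634040    # exits 85: host does not exist
	out/minikube-linux-amd64 delete -p download-only-634040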

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.14s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-634040
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (40.8s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-494598 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-494598 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (40.795029845s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (40.80s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-494598
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-494598: exit status 85 (62.266635ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-959018 | jenkins | v1.33.1 | 29 Jul 24 19:23 UTC |                     |
	|         | -p download-only-959018             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 19:24 UTC | 29 Jul 24 19:24 UTC |
	| delete  | -p download-only-959018             | download-only-959018 | jenkins | v1.33.1 | 29 Jul 24 19:24 UTC | 29 Jul 24 19:24 UTC |
	| start   | -o=json --download-only             | download-only-634040 | jenkins | v1.33.1 | 29 Jul 24 19:24 UTC |                     |
	|         | -p download-only-634040             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 19:24 UTC | 29 Jul 24 19:24 UTC |
	| delete  | -p download-only-634040             | download-only-634040 | jenkins | v1.33.1 | 29 Jul 24 19:24 UTC | 29 Jul 24 19:24 UTC |
	| start   | -o=json --download-only             | download-only-494598 | jenkins | v1.33.1 | 29 Jul 24 19:24 UTC |                     |
	|         | -p download-only-494598             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|         | --driver=kvm2                       |                      |         |         |                     |                     |
	|         | --container-runtime=crio            |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 19:24:14
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 19:24:14.857154  741498 out.go:291] Setting OutFile to fd 1 ...
	I0729 19:24:14.857429  741498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:24:14.857439  741498 out.go:304] Setting ErrFile to fd 2...
	I0729 19:24:14.857443  741498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 19:24:14.857613  741498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 19:24:14.858169  741498 out.go:298] Setting JSON to true
	I0729 19:24:14.859236  741498 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":11202,"bootTime":1722269853,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 19:24:14.859302  741498 start.go:139] virtualization: kvm guest
	I0729 19:24:14.861545  741498 out.go:97] [download-only-494598] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 19:24:14.861727  741498 notify.go:220] Checking for updates...
	I0729 19:24:14.863040  741498 out.go:169] MINIKUBE_LOCATION=19344
	I0729 19:24:14.864459  741498 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 19:24:14.865901  741498 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 19:24:14.867532  741498 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 19:24:14.868917  741498 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0729 19:24:14.871199  741498 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 19:24:14.871453  741498 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 19:24:14.905305  741498 out.go:97] Using the kvm2 driver based on user configuration
	I0729 19:24:14.905340  741498 start.go:297] selected driver: kvm2
	I0729 19:24:14.905348  741498 start.go:901] validating driver "kvm2" against <nil>
	I0729 19:24:14.905924  741498 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:24:14.906035  741498 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19344-733808/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0729 19:24:14.921600  741498 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0729 19:24:14.921661  741498 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 19:24:14.922361  741498 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0729 19:24:14.922566  741498 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 19:24:14.922599  741498 cni.go:84] Creating CNI manager for ""
	I0729 19:24:14.922611  741498 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0729 19:24:14.922623  741498 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0729 19:24:14.922697  741498 start.go:340] cluster config:
	{Name:download-only-494598 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-494598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loc
al ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 19:24:14.922823  741498 iso.go:125] acquiring lock: {Name:mk8a01df365beefb5e7e0fab8b8cbeefd69e2460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 19:24:14.924553  741498 out.go:97] Starting "download-only-494598" primary control-plane node in "download-only-494598" cluster
	I0729 19:24:14.924572  741498 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 19:24:15.434973  741498 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 19:24:15.435055  741498 cache.go:56] Caching tarball of preloaded images
	I0729 19:24:15.435310  741498 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 19:24:15.437098  741498 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 19:24:15.437118  741498 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 19:24:15.536291  741498 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:3743f5ddb63994a661f14e5a8d3af98c -> /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4
	I0729 19:24:24.900912  741498 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 19:24:24.901849  741498 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19344-733808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-cri-o-overlay-amd64.tar.lz4 ...
	I0729 19:24:25.660464  741498 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on crio
	I0729 19:24:25.660858  741498 profile.go:143] Saving config to /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/download-only-494598/config.json ...
	I0729 19:24:25.660893  741498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/download-only-494598/config.json: {Name:mkec3e286ac0f8caac85c906a68c2ae478c2e268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 19:24:25.661061  741498 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime crio
	I0729 19:24:25.661227  741498 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19344-733808/.minikube/cache/linux/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-494598 host does not exist
	  To start a cluster, run: "minikube start -p download-only-494598"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-494598
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-653636 --alsologtostderr --binary-mirror http://127.0.0.1:37543 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-653636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-653636
--- PASS: TestBinaryMirror (0.57s)

                                                
                                    
TestOffline (70.5s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-106827 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-106827 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m9.723907338s)
helpers_test.go:175: Cleaning up "offline-crio-106827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-106827
--- PASS: TestOffline (70.50s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-416933
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-416933: exit status 85 (48.499269ms)

                                                
                                                
-- stdout --
	* Profile "addons-416933" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-416933"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-416933
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-416933: exit status 85 (47.859011ms)

                                                
                                                
-- stdout --
	* Profile "addons-416933" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-416933"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestCertOptions (73.51s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-768831 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-768831 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m12.285711389s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-768831 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-768831 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-768831 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-768831" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-768831
--- PASS: TestCertOptions (73.51s)

                                                
                                    
TestForceSystemdFlag (42.67s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-832067 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-832067 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (41.684882212s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-832067 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-832067" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-832067
--- PASS: TestForceSystemdFlag (42.67s)

                                                
                                    
TestForceSystemdEnv (69.57s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-756036 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-756036 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m8.54161037s)
helpers_test.go:175: Cleaning up "force-systemd-env-756036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-756036
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-756036: (1.031092099s)
--- PASS: TestForceSystemdEnv (69.57s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.62s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.62s)

                                                
                                    
TestErrorSpam/setup (43.09s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-312042 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-312042 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-312042 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-312042 --driver=kvm2  --container-runtime=crio: (43.093282382s)
--- PASS: TestErrorSpam/setup (43.09s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.69s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 status
--- PASS: TestErrorSpam/status (0.69s)

                                                
                                    
TestErrorSpam/pause (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 pause
--- PASS: TestErrorSpam/pause (1.47s)

                                                
                                    
TestErrorSpam/unpause (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 unpause
--- PASS: TestErrorSpam/unpause (1.52s)

                                                
                                    
TestErrorSpam/stop (4.39s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 stop: (1.461415662s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 stop: (1.634814024s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-312042 --log_dir /tmp/nospam-312042 stop: (1.296622921s)
--- PASS: TestErrorSpam/stop (4.39s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19344-733808/.minikube/files/etc/test/nested/copy/740962/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (52.36s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-483711 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-483711 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (52.356724391s)
--- PASS: TestFunctional/serial/StartWithProxy (52.36s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (42.2s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-483711 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-483711 --alsologtostderr -v=8: (42.198599996s)
functional_test.go:659: soft start took 42.199450643s for "functional-483711" cluster.
--- PASS: TestFunctional/serial/SoftStart (42.20s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-483711 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.6s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-483711 cache add registry.k8s.io/pause:3.1: (1.183201353s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-483711 cache add registry.k8s.io/pause:3.3: (1.244723473s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-483711 cache add registry.k8s.io/pause:latest: (1.17121033s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.60s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-483711 /tmp/TestFunctionalserialCacheCmdcacheadd_local2096604547/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 cache add minikube-local-cache-test:functional-483711
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-483711 cache add minikube-local-cache-test:functional-483711: (1.756831627s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 cache delete minikube-local-cache-test:functional-483711
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-483711
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.08s)
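The add_local flow above builds a throwaway image with docker, loads it into minikube's image cache, then removes it from both the cache and the local docker daemon. A rough by-hand equivalent, assuming docker is installed and substituting a placeholder build context for the test's temporary directory:

	docker build -t minikube-local-cache-test:functional-483711 <build-context-dir>
	out/minikube-linux-amd64 -p functional-483711 cache add minikube-local-cache-test:functional-483711
	out/minikube-linux-amd64 -p functional-483711 cache delete minikube-local-cache-test:functional-483711
	docker rmi minikube-local-cache-test:functional-483711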

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483711 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (206.308039ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
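The cache_reload sequence above removes a cached image from inside the node, confirms crictl can no longer find it (exit status 1), restores it with "cache reload", and verifies it again. A minimal sketch against the same profile, assuming registry.k8s.io/pause:latest is already in the minikube cache:

	out/minikube-linux-amd64 -p functional-483711 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-483711 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image was removed
	out/minikube-linux-amd64 -p functional-483711 cache reload
	out/minikube-linux-amd64 -p functional-483711 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after reload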

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 kubectl -- --context functional-483711 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-483711 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.71s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-483711 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-483711 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.706681609s)
functional_test.go:757: restart took 33.706818019s for "functional-483711" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.71s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-483711 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-483711 logs: (1.370612057s)
--- PASS: TestFunctional/serial/LogsCmd (1.37s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 logs --file /tmp/TestFunctionalserialLogsFileCmd1971156984/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-483711 logs --file /tmp/TestFunctionalserialLogsFileCmd1971156984/001/logs.txt: (1.400194108s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

                                                
                                    
TestFunctional/serial/InvalidService (4.34s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-483711 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-483711
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-483711: exit status 115 (265.880937ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.12:30391 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-483711 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.34s)
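InvalidService applies a Service manifest that has no running pod behind it, so "minikube service" prints a NodePort URL but still exits 115 with SVC_UNREACHABLE. A minimal sketch of the same round trip, assuming the repository's testdata/invalidsvc.yaml manifest:

	kubectl --context functional-483711 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-amd64 service invalid-svc -p functional-483711   # exit 115: no running pod for service
	kubectl --context functional-483711 delete -f testdata/invalidsvc.yaml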

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483711 config get cpus: exit status 14 (49.389519ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483711 config get cpus: exit status 14 (43.994203ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
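The ConfigCmd run shows that "config get" on an unset key exits 14 with "specified key could not be found in config", while the set/get/unset round trip in between succeeds. A minimal sketch of that round trip against the same profile:

	out/minikube-linux-amd64 -p functional-483711 config get cpus     # exit 14: key not set
	out/minikube-linux-amd64 -p functional-483711 config set cpus 2
	out/minikube-linux-amd64 -p functional-483711 config get cpus     # prints the stored value
	out/minikube-linux-amd64 -p functional-483711 config unset cpus
	out/minikube-linux-amd64 -p functional-483711 config get cpus     # exit 14 again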

                                                
                                    
TestFunctional/parallel/DashboardCmd (19.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-483711 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-483711 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 754211: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (19.04s)

                                                
                                    
TestFunctional/parallel/DryRun (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-483711 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-483711 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (139.114404ms)

                                                
                                                
-- stdout --
	* [functional-483711] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0729 20:08:29.032179  754103 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:08:29.032291  754103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:08:29.032299  754103 out.go:304] Setting ErrFile to fd 2...
	I0729 20:08:29.032304  754103 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:08:29.032517  754103 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:08:29.033060  754103 out.go:298] Setting JSON to false
	I0729 20:08:29.034083  754103 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":13856,"bootTime":1722269853,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 20:08:29.034142  754103 start.go:139] virtualization: kvm guest
	I0729 20:08:29.036070  754103 out.go:177] * [functional-483711] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0729 20:08:29.037527  754103 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 20:08:29.037531  754103 notify.go:220] Checking for updates...
	I0729 20:08:29.039049  754103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 20:08:29.040420  754103 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:08:29.041787  754103 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:08:29.043159  754103 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 20:08:29.044873  754103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 20:08:29.047087  754103 config.go:182] Loaded profile config "functional-483711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:08:29.047719  754103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:08:29.047801  754103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:08:29.064892  754103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41823
	I0729 20:08:29.065398  754103 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:08:29.065952  754103 main.go:141] libmachine: Using API Version  1
	I0729 20:08:29.065976  754103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:08:29.066291  754103 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:08:29.066488  754103 main.go:141] libmachine: (functional-483711) Calling .DriverName
	I0729 20:08:29.066766  754103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 20:08:29.067065  754103 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:08:29.067108  754103 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:08:29.083312  754103 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40659
	I0729 20:08:29.083783  754103 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:08:29.084254  754103 main.go:141] libmachine: Using API Version  1
	I0729 20:08:29.084279  754103 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:08:29.084727  754103 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:08:29.084908  754103 main.go:141] libmachine: (functional-483711) Calling .DriverName
	I0729 20:08:29.119637  754103 out.go:177] * Using the kvm2 driver based on existing profile
	I0729 20:08:29.120884  754103 start.go:297] selected driver: kvm2
	I0729 20:08:29.120900  754103 start.go:901] validating driver "kvm2" against &{Name:functional-483711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-483711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:08:29.121017  754103 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 20:08:29.123134  754103 out.go:177] 
	W0729 20:08:29.124560  754103 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 20:08:29.125770  754103 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-483711 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.27s)
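
Note: the dry run passes because minikube refuses the deliberately undersized request; 250MiB is below the 1800MB usable minimum quoted in the error above. As a rough illustration only (the constant and function names below are invented for this sketch, not taken from the minikube source), the pre-flight check amounts to:

	package main

	import (
		"fmt"
		"os"
	)

	// minimumMemoryMB mirrors the 1800MB floor quoted in the error above;
	// the constant is illustrative, not copied from minikube.
	const minimumMemoryMB = 1800

	// validateMemory refuses allocations below the usable minimum, in the
	// spirit of the RSRC_INSUFFICIENT_REQ_MEMORY exit seen in this test.
	func validateMemory(requestedMB int) error {
		if requestedMB < minimumMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minimumMemoryMB)
		}
		return nil
	}

	func main() {
		if err := validateMemory(250); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
			os.Exit(23) // the equivalent CLI check in this report exits with status 23
		}
	}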

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-483711 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-483711 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (157.168349ms)

-- stdout --
	* [functional-483711] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0729 20:08:27.064609  753690 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:08:27.064752  753690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:08:27.064764  753690 out.go:304] Setting ErrFile to fd 2...
	I0729 20:08:27.064771  753690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:08:27.065204  753690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:08:27.065950  753690 out.go:298] Setting JSON to false
	I0729 20:08:27.067365  753690 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":13854,"bootTime":1722269853,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0729 20:08:27.067451  753690 start.go:139] virtualization: kvm guest
	I0729 20:08:27.069685  753690 out.go:177] * [functional-483711] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0729 20:08:27.071426  753690 out.go:177]   - MINIKUBE_LOCATION=19344
	I0729 20:08:27.071475  753690 notify.go:220] Checking for updates...
	I0729 20:08:27.073720  753690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 20:08:27.074910  753690 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	I0729 20:08:27.076192  753690 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	I0729 20:08:27.077401  753690 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0729 20:08:27.078758  753690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 20:08:27.080016  753690 config.go:182] Loaded profile config "functional-483711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:08:27.080416  753690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:08:27.080464  753690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:08:27.098067  753690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35557
	I0729 20:08:27.098711  753690 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:08:27.099505  753690 main.go:141] libmachine: Using API Version  1
	I0729 20:08:27.099529  753690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:08:27.100005  753690 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:08:27.100284  753690 main.go:141] libmachine: (functional-483711) Calling .DriverName
	I0729 20:08:27.100577  753690 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 20:08:27.100898  753690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:08:27.100932  753690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:08:27.116428  753690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40711
	I0729 20:08:27.116919  753690 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:08:27.117493  753690 main.go:141] libmachine: Using API Version  1
	I0729 20:08:27.117530  753690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:08:27.117883  753690 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:08:27.118109  753690 main.go:141] libmachine: (functional-483711) Calling .DriverName
	I0729 20:08:27.155926  753690 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0729 20:08:27.157526  753690 start.go:297] selected driver: kvm2
	I0729 20:08:27.157545  753690 start.go:901] validating driver "kvm2" against &{Name:functional-483711 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19319/minikube-v1.33.1-1721690939-19319-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:functional-483711 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.12 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mou
nt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 20:08:27.157685  753690 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 20:08:27.159653  753690 out.go:177] 
	W0729 20:08:27.160999  753690 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 20:08:27.162178  753690 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
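
Note: this test repeats the same undersized dry run under a French locale and expects the translated RSRC_INSUFFICIENT_REQ_MEMORY message shown above (in English: the requested 250 MiB allocation is below the usable 1800 MB minimum). A sketch of reproducing it by hand, assuming minikube picks its translations from the standard LC_ALL/LANG environment variables:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same dry-run the test issues, but under a French locale.
		// Assumption: minikube reads LC_ALL/LANG to pick its translations;
		// the binary path matches the one used throughout this report.
		cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-483711",
			"--dry-run", "--memory", "250MB", "--alsologtostderr",
			"--driver=kvm2", "--container-runtime=crio")
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out)) // expect the French RSRC_INSUFFICIENT_REQ_MEMORY message
		if err != nil {
			fmt.Fprintln(os.Stderr, "exit error (expected, exit status 23):", err)
		}
	}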

TestFunctional/parallel/StatusCmd (0.78s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.78s)

TestFunctional/parallel/ServiceCmdConnect (12.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-483711 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-483711 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-mlxb4" [83550ce4-c70b-414d-a4db-17d1f635b6c6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-mlxb4" [83550ce4-c70b-414d-a4db-17d1f635b6c6] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.004186246s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.12:31766
functional_test.go:1671: http://192.168.39.12:31766: success! body:

Hostname: hello-node-connect-57b4589c47-mlxb4

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.12:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.12:31766
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.62s)
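
Note: after `minikube service hello-node-connect --url` resolves the NodePort endpoint, the test simply fetches it and checks the echoserver response shown above. A minimal sketch of that check (the IP and port are the ones from this run and will differ elsewhere):

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
		"strings"
	)

	func main() {
		// Endpoint printed by `minikube service hello-node-connect --url` above;
		// it is specific to this run.
		url := "http://192.168.39.12:31766"
		resp, err := http.Get(url)
		if err != nil {
			log.Fatalf("GET %s: %v", url, err)
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			log.Fatal(err)
		}
		// The echoserver reply includes the serving pod's hostname.
		if !strings.Contains(string(body), "Hostname:") {
			log.Fatalf("unexpected body:\n%s", body)
		}
		fmt.Printf("service reachable, %d-byte body\n", len(body))
	}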

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (36.3s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cb4386ba-bc02-42d1-ac14-e07d20035bd8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004163601s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-483711 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-483711 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-483711 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-483711 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7d69b27b-615f-4581-a7e9-d31f4616fdba] Pending
helpers_test.go:344: "sp-pod" [7d69b27b-615f-4581-a7e9-d31f4616fdba] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7d69b27b-615f-4581-a7e9-d31f4616fdba] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.005279802s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-483711 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-483711 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-483711 delete -f testdata/storage-provisioner/pod.yaml: (1.514171713s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-483711 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [eb62f58f-1c19-4fd4-86fd-a84841c812a0] Pending
helpers_test.go:344: "sp-pod" [eb62f58f-1c19-4fd4-86fd-a84841c812a0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [eb62f58f-1c19-4fd4-86fd-a84841c812a0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.005194774s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-483711 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.30s)
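
Note: the sequence above proves persistence by writing a marker file into the PVC-backed mount, deleting the pod, recreating it from the same manifest, and listing the mount again. A condensed sketch of the same steps driven through kubectl (it omits the wait-for-Running step the test performs between delete and exec):

	package main

	import (
		"log"
		"os/exec"
	)

	// run shells out to kubectl against the profile's context and fails fast.
	func run(args ...string) []byte {
		full := append([]string{"--context", "functional-483711"}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		if err != nil {
			log.Fatalf("kubectl %v: %v\n%s", args, err, out)
		}
		return out
	}

	func main() {
		// Write a marker file into the PVC-backed mount (names from the log above).
		run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		// Delete the pod and recreate it from the same manifest the test applies.
		run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// Once the new pod is Running, the file should still be on the volume.
		log.Printf("mount contents after recreate:\n%s", run("exec", "sp-pod", "--", "ls", "/tmp/mount"))
	}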

TestFunctional/parallel/SSHCmd (0.44s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.44s)

TestFunctional/parallel/CpCmd (1.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh -n functional-483711 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 cp functional-483711:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1137656818/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh -n functional-483711 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh -n functional-483711 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.42s)

TestFunctional/parallel/MySQL (27.78s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-483711 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-pl752" [4aac191a-4342-48fa-bb83-b7b9bb0a75ab] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-pl752" [4aac191a-4342-48fa-bb83-b7b9bb0a75ab] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.019725206s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-483711 exec mysql-64454c8b5c-pl752 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-483711 exec mysql-64454c8b5c-pl752 -- mysql -ppassword -e "show databases;": exit status 1 (144.533594ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-483711 exec mysql-64454c8b5c-pl752 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-483711 exec mysql-64454c8b5c-pl752 -- mysql -ppassword -e "show databases;": exit status 1 (133.08485ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-483711 exec mysql-64454c8b5c-pl752 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.78s)
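
Note: the two failed `show databases;` attempts above are expected; ERROR 2002 only means mysqld inside the pod is not yet listening on its socket, and the test retries until the query succeeds. A sketch of that retry loop (pod name taken from this run):

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		// Retry the same query the test runs until mysqld accepts connections.
		// The pod name is from this run and changes on every deployment.
		args := []string{"--context", "functional-483711", "exec", "mysql-64454c8b5c-pl752", "--",
			"mysql", "-ppassword", "-e", "show databases;"}
		deadline := time.Now().Add(2 * time.Minute)
		for {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				log.Printf("databases:\n%s", out)
				return
			}
			if time.Now().After(deadline) {
				log.Fatalf("mysql never became ready: %v\n%s", err, out)
			}
			time.Sleep(5 * time.Second) // ERROR 2002 only means mysqld is not listening yet
		}
	}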

TestFunctional/parallel/FileSync (0.19s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/740962/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "sudo cat /etc/test/nested/copy/740962/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

TestFunctional/parallel/CertSync (1.32s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/740962.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "sudo cat /etc/ssl/certs/740962.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/740962.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "sudo cat /usr/share/ca-certificates/740962.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/7409622.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "sudo cat /etc/ssl/certs/7409622.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/7409622.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "sudo cat /usr/share/ca-certificates/7409622.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.32s)
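
Note: the `.0` files checked above are OpenSSL subject-hash style names under /etc/ssl/certs, presumably hash links for the same synced test certificates (740962.pem and 7409622.pem). A sketch of deriving that hash for a PEM file, assuming an openssl binary is available:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Print the subject hash OpenSSL uses for trust-store links such as
		// /etc/ssl/certs/51391683.0 in the checks above. The path is the one
		// used inside the VM; run this wherever the PEM file is available.
		pem := "/etc/ssl/certs/740962.pem"
		out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash", "-in", pem).Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s -> /etc/ssl/certs/%s.0\n", pem, strings.TrimSpace(string(out)))
	}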

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-483711 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483711 ssh "sudo systemctl is-active docker": exit status 1 (189.632939ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483711 ssh "sudo systemctl is-active containerd": exit status 1 (187.792056ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.38s)
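
Note: the non-zero exits above are the expected result. `systemctl is-active` prints the unit state and exits non-zero (status 3 here) when the unit is not running, so "inactive" for both docker and containerd confirms CRI-O is the only active runtime on this profile. A small sketch of the same check:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// isActive reports whether a systemd unit is active inside the minikube VM.
	// `systemctl is-active` prints the state and exits non-zero when the unit is
	// not running, which is why the checks above see both "inactive" on stdout
	// and a failing exit status.
	func isActive(profile, unit string) bool {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		if err != nil {
			fmt.Printf("%s is %q (expected for a disabled runtime): %v\n", unit, state, err)
			return false
		}
		return state == "active"
	}

	func main() {
		for _, unit := range []string{"docker", "containerd"} {
			fmt.Printf("%s active: %v\n", unit, isActive("functional-483711", unit))
		}
	}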

TestFunctional/parallel/License (0.57s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.57s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "228.850817ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "53.478132ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.76s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.76s)

TestFunctional/parallel/ServiceCmd/DeployApp (20.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-483711 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-483711 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-d2m7x" [8c02d6fe-e16b-4f58-90e7-aca93cd91423] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-d2m7x" [8c02d6fe-e16b-4f58-90e7-aca93cd91423] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.006429911s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.23s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "237.172239ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "45.864832ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.28s)
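
Note: both profile listings above emit JSON. A sketch that decodes the output without assuming its exact schema (the top-level keys are printed as-is):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Decode `minikube profile list -o json` without assuming its exact
		// schema; the binary path matches the one used throughout this report.
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var profiles map[string]interface{}
		if err := json.Unmarshal(out, &profiles); err != nil {
			log.Fatalf("profile list did not return valid JSON: %v", err)
		}
		for key, val := range profiles {
			fmt.Printf("%s: %T\n", key, val)
		}
	}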

TestFunctional/parallel/ImageCommands/ImageListShort (1.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-483711 image ls --format short --alsologtostderr: (1.124015929s)
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-483711 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
localhost/minikube-local-cache-test:functional-483711
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-483711
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-483711 image ls --format short --alsologtostderr:
I0729 20:08:41.322759  755171 out.go:291] Setting OutFile to fd 1 ...
I0729 20:08:41.322902  755171 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 20:08:41.322914  755171 out.go:304] Setting ErrFile to fd 2...
I0729 20:08:41.322920  755171 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 20:08:41.323167  755171 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
I0729 20:08:41.324884  755171 config.go:182] Loaded profile config "functional-483711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 20:08:41.325040  755171 config.go:182] Loaded profile config "functional-483711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 20:08:41.325417  755171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 20:08:41.325453  755171 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 20:08:41.341391  755171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
I0729 20:08:41.341896  755171 main.go:141] libmachine: () Calling .GetVersion
I0729 20:08:41.342489  755171 main.go:141] libmachine: Using API Version  1
I0729 20:08:41.342515  755171 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 20:08:41.342851  755171 main.go:141] libmachine: () Calling .GetMachineName
I0729 20:08:41.343068  755171 main.go:141] libmachine: (functional-483711) Calling .GetState
I0729 20:08:41.345011  755171 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 20:08:41.345056  755171 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 20:08:41.360271  755171 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36737
I0729 20:08:41.360702  755171 main.go:141] libmachine: () Calling .GetVersion
I0729 20:08:41.361153  755171 main.go:141] libmachine: Using API Version  1
I0729 20:08:41.361178  755171 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 20:08:41.361492  755171 main.go:141] libmachine: () Calling .GetMachineName
I0729 20:08:41.361659  755171 main.go:141] libmachine: (functional-483711) Calling .DriverName
I0729 20:08:41.361937  755171 ssh_runner.go:195] Run: systemctl --version
I0729 20:08:41.361969  755171 main.go:141] libmachine: (functional-483711) Calling .GetSSHHostname
I0729 20:08:41.364910  755171 main.go:141] libmachine: (functional-483711) DBG | domain functional-483711 has defined MAC address 52:54:00:74:5b:f7 in network mk-functional-483711
I0729 20:08:41.365282  755171 main.go:141] libmachine: (functional-483711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:5b:f7", ip: ""} in network mk-functional-483711: {Iface:virbr1 ExpiryTime:2024-07-29 21:06:04 +0000 UTC Type:0 Mac:52:54:00:74:5b:f7 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-483711 Clientid:01:52:54:00:74:5b:f7}
I0729 20:08:41.365311  755171 main.go:141] libmachine: (functional-483711) DBG | domain functional-483711 has defined IP address 192.168.39.12 and MAC address 52:54:00:74:5b:f7 in network mk-functional-483711
I0729 20:08:41.365443  755171 main.go:141] libmachine: (functional-483711) Calling .GetSSHPort
I0729 20:08:41.365643  755171 main.go:141] libmachine: (functional-483711) Calling .GetSSHKeyPath
I0729 20:08:41.365801  755171 main.go:141] libmachine: (functional-483711) Calling .GetSSHUsername
I0729 20:08:41.365995  755171 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/functional-483711/id_rsa Username:docker}
I0729 20:08:41.473696  755171 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 20:08:42.400434  755171 main.go:141] libmachine: Making call to close driver server
I0729 20:08:42.400464  755171 main.go:141] libmachine: (functional-483711) Calling .Close
I0729 20:08:42.400780  755171 main.go:141] libmachine: Successfully made call to close driver server
I0729 20:08:42.400888  755171 main.go:141] libmachine: (functional-483711) DBG | Closing plugin on server side
I0729 20:08:42.400916  755171 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 20:08:42.400926  755171 main.go:141] libmachine: Making call to close driver server
I0729 20:08:42.400934  755171 main.go:141] libmachine: (functional-483711) Calling .Close
I0729 20:08:42.401150  755171 main.go:141] libmachine: Successfully made call to close driver server
I0729 20:08:42.401164  755171 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.12s)
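
Note: the stderr trace above shows that `image ls` is backed by `sudo crictl images --output json` on the node. A sketch that runs the same command over `minikube ssh` and prints the repo tags, assuming the usual crictl JSON shape (an "images" array with a repoTags field per entry):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// criImages covers only the fields this sketch needs from the
	// `crictl images --output json` payload.
	type criImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// Run the same command the trace above shows, via `minikube ssh`.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-483711",
			"ssh", "sudo crictl images --output json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var imgs criImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			log.Fatal(err)
		}
		for _, img := range imgs.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag)
			}
		}
	}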

TestFunctional/parallel/ImageCommands/ImageListTable (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-483711 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| localhost/my-image                      | functional-483711  | 60ac31af3ae1d | 1.47MB |
| registry.k8s.io/coredns/coredns         | v1.11.1            | cbb01a7bd410d | 61.2MB |
| registry.k8s.io/etcd                    | 3.5.12-0           | 3861cfcd7c04c | 151MB  |
| registry.k8s.io/kube-proxy              | v1.30.3            | 55bb025d2cfa5 | 86MB   |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20240715-585640e9 | 5cc3abe5717db | 87.2MB |
| docker.io/library/nginx                 | latest             | a72860cb95fd5 | 192MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.30.3            | 76932a3b37d7e | 112MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kicbase/echo-server           | functional-483711  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-483711  | 4bdcda058114e | 3.33kB |
| registry.k8s.io/kube-apiserver          | v1.30.3            | 1f6d574d502f3 | 118MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-scheduler          | v1.30.3            | 3edc18e7b7672 | 63.1MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-483711 image ls --format table --alsologtostderr:
I0729 20:08:46.705761  755342 out.go:291] Setting OutFile to fd 1 ...
I0729 20:08:46.705897  755342 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 20:08:46.705909  755342 out.go:304] Setting ErrFile to fd 2...
I0729 20:08:46.705914  755342 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 20:08:46.706218  755342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
I0729 20:08:46.706998  755342 config.go:182] Loaded profile config "functional-483711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 20:08:46.707164  755342 config.go:182] Loaded profile config "functional-483711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 20:08:46.707734  755342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 20:08:46.707791  755342 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 20:08:46.723867  755342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33041
I0729 20:08:46.724341  755342 main.go:141] libmachine: () Calling .GetVersion
I0729 20:08:46.725091  755342 main.go:141] libmachine: Using API Version  1
I0729 20:08:46.725128  755342 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 20:08:46.725492  755342 main.go:141] libmachine: () Calling .GetMachineName
I0729 20:08:46.725731  755342 main.go:141] libmachine: (functional-483711) Calling .GetState
I0729 20:08:46.727680  755342 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 20:08:46.727723  755342 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 20:08:46.744334  755342 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43061
I0729 20:08:46.744737  755342 main.go:141] libmachine: () Calling .GetVersion
I0729 20:08:46.745242  755342 main.go:141] libmachine: Using API Version  1
I0729 20:08:46.745267  755342 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 20:08:46.745628  755342 main.go:141] libmachine: () Calling .GetMachineName
I0729 20:08:46.745880  755342 main.go:141] libmachine: (functional-483711) Calling .DriverName
I0729 20:08:46.746098  755342 ssh_runner.go:195] Run: systemctl --version
I0729 20:08:46.746120  755342 main.go:141] libmachine: (functional-483711) Calling .GetSSHHostname
I0729 20:08:46.749122  755342 main.go:141] libmachine: (functional-483711) DBG | domain functional-483711 has defined MAC address 52:54:00:74:5b:f7 in network mk-functional-483711
I0729 20:08:46.749681  755342 main.go:141] libmachine: (functional-483711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:5b:f7", ip: ""} in network mk-functional-483711: {Iface:virbr1 ExpiryTime:2024-07-29 21:06:04 +0000 UTC Type:0 Mac:52:54:00:74:5b:f7 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-483711 Clientid:01:52:54:00:74:5b:f7}
I0729 20:08:46.749717  755342 main.go:141] libmachine: (functional-483711) DBG | domain functional-483711 has defined IP address 192.168.39.12 and MAC address 52:54:00:74:5b:f7 in network mk-functional-483711
I0729 20:08:46.749854  755342 main.go:141] libmachine: (functional-483711) Calling .GetSSHPort
I0729 20:08:46.750064  755342 main.go:141] libmachine: (functional-483711) Calling .GetSSHKeyPath
I0729 20:08:46.750246  755342 main.go:141] libmachine: (functional-483711) Calling .GetSSHUsername
I0729 20:08:46.750556  755342 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/functional-483711/id_rsa Username:docker}
I0729 20:08:46.863459  755342 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 20:08:47.575772  755342 main.go:141] libmachine: Making call to close driver server
I0729 20:08:47.575798  755342 main.go:141] libmachine: (functional-483711) Calling .Close
I0729 20:08:47.576112  755342 main.go:141] libmachine: Successfully made call to close driver server
I0729 20:08:47.576129  755342 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 20:08:47.576142  755342 main.go:141] libmachine: Making call to close driver server
I0729 20:08:47.576150  755342 main.go:141] libmachine: (functional-483711) Calling .Close
I0729 20:08:47.576163  755342 main.go:141] libmachine: (functional-483711) DBG | Closing plugin on server side
I0729 20:08:47.576346  755342 main.go:141] libmachine: Successfully made call to close driver server
I0729 20:08:47.576358  755342 main.go:141] libmachine: Making call to close connection to plugin binary
2024/07/29 20:08:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.93s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-483711 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:functional-483711"],"size":"4943877"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"3861cfcd7c04ccac1f062788eca394
87248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"150779692"},{"id":"60ac31af3ae1d92b6bb65011481d2f4e745f4d0c1eb4270031ecec85e0583c9a","repoDigests":["localhost/my-image@sha256:6333747384d373ed90292e93ef0ba3a19b1bc7b62fa5759ea4e02440da41ff75"],"repoTags":["localhost/my-image:functional-483711"],"size":"1468600"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1","registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"61245718"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":["r
egistry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"85953945"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"134f31c160fd89ea780c2a86ec7d371765d925c324b5e6243d0d5d409bd04f20","repoDigests":["docker.io/library/48850c1290efb1eb7f268ee182aa136429d14895b2c52a880d6a5cae
db0e9078-tmp@sha256:2a1b13415f89e1154c6d8e528a639acc79560788f6df655dbf61e0bc3b14052d"],"repoTags":[],"size":"1466017"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c0
5ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7","registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"112198984"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"rep
oTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f","repoDigests":["docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115","docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"87165492"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c","docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc"],"repoTags":["docker.io/library/nginx:latest"],"size":"191750286"},{"id":"4bdcda058114ec939cc4ebddfec6d80345d10cda6f804d7deb83fe889f9bb971","repoDigests":["localhost/minikube-local-cache-test@sha256:0e45fd7d57e2b0aedcefb08a106a73e812484fae0b14c7a7dd34a501ed795b6c"],"repoTags":["localhost/minikube-local-cache-test:
functional-483711"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c","registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117609954"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"
63051080"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-483711 image ls --format json --alsologtostderr:
I0729 20:08:46.145204  755318 out.go:291] Setting OutFile to fd 1 ...
I0729 20:08:46.145776  755318 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 20:08:46.145841  755318 out.go:304] Setting ErrFile to fd 2...
I0729 20:08:46.145861  755318 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 20:08:46.146413  755318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
I0729 20:08:46.147647  755318 config.go:182] Loaded profile config "functional-483711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 20:08:46.147837  755318 config.go:182] Loaded profile config "functional-483711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 20:08:46.148490  755318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 20:08:46.148550  755318 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 20:08:46.163907  755318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41777
I0729 20:08:46.164406  755318 main.go:141] libmachine: () Calling .GetVersion
I0729 20:08:46.164984  755318 main.go:141] libmachine: Using API Version  1
I0729 20:08:46.165011  755318 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 20:08:46.165433  755318 main.go:141] libmachine: () Calling .GetMachineName
I0729 20:08:46.165637  755318 main.go:141] libmachine: (functional-483711) Calling .GetState
I0729 20:08:46.167500  755318 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 20:08:46.167537  755318 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 20:08:46.184060  755318 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38239
I0729 20:08:46.184508  755318 main.go:141] libmachine: () Calling .GetVersion
I0729 20:08:46.184926  755318 main.go:141] libmachine: Using API Version  1
I0729 20:08:46.184951  755318 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 20:08:46.185261  755318 main.go:141] libmachine: () Calling .GetMachineName
I0729 20:08:46.185445  755318 main.go:141] libmachine: (functional-483711) Calling .DriverName
I0729 20:08:46.185662  755318 ssh_runner.go:195] Run: systemctl --version
I0729 20:08:46.185685  755318 main.go:141] libmachine: (functional-483711) Calling .GetSSHHostname
I0729 20:08:46.188891  755318 main.go:141] libmachine: (functional-483711) DBG | domain functional-483711 has defined MAC address 52:54:00:74:5b:f7 in network mk-functional-483711
I0729 20:08:46.189399  755318 main.go:141] libmachine: (functional-483711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:5b:f7", ip: ""} in network mk-functional-483711: {Iface:virbr1 ExpiryTime:2024-07-29 21:06:04 +0000 UTC Type:0 Mac:52:54:00:74:5b:f7 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-483711 Clientid:01:52:54:00:74:5b:f7}
I0729 20:08:46.189429  755318 main.go:141] libmachine: (functional-483711) DBG | domain functional-483711 has defined IP address 192.168.39.12 and MAC address 52:54:00:74:5b:f7 in network mk-functional-483711
I0729 20:08:46.189567  755318 main.go:141] libmachine: (functional-483711) Calling .GetSSHPort
I0729 20:08:46.189775  755318 main.go:141] libmachine: (functional-483711) Calling .GetSSHKeyPath
I0729 20:08:46.189948  755318 main.go:141] libmachine: (functional-483711) Calling .GetSSHUsername
I0729 20:08:46.190109  755318 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/functional-483711/id_rsa Username:docker}
I0729 20:08:46.310363  755318 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 20:08:46.643918  755318 main.go:141] libmachine: Making call to close driver server
I0729 20:08:46.643937  755318 main.go:141] libmachine: (functional-483711) Calling .Close
I0729 20:08:46.644269  755318 main.go:141] libmachine: Successfully made call to close driver server
I0729 20:08:46.644343  755318 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 20:08:46.644374  755318 main.go:141] libmachine: Making call to close driver server
I0729 20:08:46.644389  755318 main.go:141] libmachine: (functional-483711) Calling .Close
I0729 20:08:46.646402  755318 main.go:141] libmachine: (functional-483711) DBG | Closing plugin on server side
I0729 20:08:46.646431  755318 main.go:141] libmachine: Successfully made call to close driver server
I0729 20:08:46.646455  755318 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.56s)
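Note on the JSON above: each entry emitted by "image ls --format json" is an object with string fields id and size plus repoDigests/repoTags arrays. As a rough aid for post-processing these reports, here is a minimal Go sketch for decoding that shape; the struct layout and the input file name (images.json) are assumptions inferred from the output above, not taken from minikube's source.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// imageRecord mirrors the shape of the entries printed above by
// "minikube image ls --format json"; field names are inferred from
// that output and may differ from minikube's internal types.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // sizes are emitted as strings, e.g. "150779692"
}

func main() {
	// Assumes the JSON array from the test output was saved to images.json.
	data, err := os.ReadFile("images.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var images []imageRecord
	if err := json.Unmarshal(data, &images); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}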

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-483711 image ls --format yaml --alsologtostderr:
- id: 5cc3abe5717dbf8e0db6fc2e890c3f82fe95e2ce5c5d7320f98a5b71c767d42f
repoDigests:
- docker.io/kindest/kindnetd@sha256:3b93f681916ee780a9941d48cb20622486c08af54f8d87d801412bcca0832115
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "87165492"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
- docker.io/library/nginx@sha256:baa881b012a49e3c2cd6ab9d80f9fcd2962a98af8ede947d0ef930a427b28afc
repoTags:
- docker.io/library/nginx:latest
size: "191750286"
- id: 4bdcda058114ec939cc4ebddfec6d80345d10cda6f804d7deb83fe889f9bb971
repoDigests:
- localhost/minikube-local-cache-test@sha256:0e45fd7d57e2b0aedcefb08a106a73e812484fae0b14c7a7dd34a501ed795b6c
repoTags:
- localhost/minikube-local-cache-test:functional-483711
size: "3330"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:2e6b9c67730f1f1dce4c6e16d60135e00608728567f537e8ff70c244756cbb62
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "150779692"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8c178447597867a03bbcdf0d1ce43fc8f6807ead2321bd1ec0e845a2f12dad80
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "85953945"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
- registry.k8s.io/kube-apiserver@sha256:a3a6c80030a6e720734ae3291448388f70b6f1d463f103e4f06f358f8a170315
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117609954"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
- registry.k8s.io/kube-controller-manager@sha256:fa179d147c6bacddd1586f6d12ff79a844e951c7b159fdcb92cdf56f3033d91e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "112198984"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1738178fb116d10e7cde2cfc3671f5dfdad518d773677af740483f2dfe674266
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "63051080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:functional-483711
size: "4943877"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
- registry.k8s.io/coredns/coredns@sha256:2169b3b96af988cf69d7dd69efbcc59433eb027320eb185c6110e0850b997870
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "61245718"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-483711 image ls --format yaml --alsologtostderr:
I0729 20:08:42.451406  755195 out.go:291] Setting OutFile to fd 1 ...
I0729 20:08:42.451642  755195 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 20:08:42.451651  755195 out.go:304] Setting ErrFile to fd 2...
I0729 20:08:42.451655  755195 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 20:08:42.451825  755195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
I0729 20:08:42.452415  755195 config.go:182] Loaded profile config "functional-483711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 20:08:42.452516  755195 config.go:182] Loaded profile config "functional-483711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 20:08:42.452997  755195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 20:08:42.453046  755195 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 20:08:42.471693  755195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46279
I0729 20:08:42.472234  755195 main.go:141] libmachine: () Calling .GetVersion
I0729 20:08:42.472922  755195 main.go:141] libmachine: Using API Version  1
I0729 20:08:42.472955  755195 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 20:08:42.473408  755195 main.go:141] libmachine: () Calling .GetMachineName
I0729 20:08:42.473722  755195 main.go:141] libmachine: (functional-483711) Calling .GetState
I0729 20:08:42.475723  755195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 20:08:42.475758  755195 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 20:08:42.490882  755195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33275
I0729 20:08:42.491328  755195 main.go:141] libmachine: () Calling .GetVersion
I0729 20:08:42.491959  755195 main.go:141] libmachine: Using API Version  1
I0729 20:08:42.491999  755195 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 20:08:42.492318  755195 main.go:141] libmachine: () Calling .GetMachineName
I0729 20:08:42.492503  755195 main.go:141] libmachine: (functional-483711) Calling .DriverName
I0729 20:08:42.492697  755195 ssh_runner.go:195] Run: systemctl --version
I0729 20:08:42.492727  755195 main.go:141] libmachine: (functional-483711) Calling .GetSSHHostname
I0729 20:08:42.496146  755195 main.go:141] libmachine: (functional-483711) DBG | domain functional-483711 has defined MAC address 52:54:00:74:5b:f7 in network mk-functional-483711
I0729 20:08:42.496613  755195 main.go:141] libmachine: (functional-483711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:5b:f7", ip: ""} in network mk-functional-483711: {Iface:virbr1 ExpiryTime:2024-07-29 21:06:04 +0000 UTC Type:0 Mac:52:54:00:74:5b:f7 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-483711 Clientid:01:52:54:00:74:5b:f7}
I0729 20:08:42.496648  755195 main.go:141] libmachine: (functional-483711) DBG | domain functional-483711 has defined IP address 192.168.39.12 and MAC address 52:54:00:74:5b:f7 in network mk-functional-483711
I0729 20:08:42.496811  755195 main.go:141] libmachine: (functional-483711) Calling .GetSSHPort
I0729 20:08:42.496994  755195 main.go:141] libmachine: (functional-483711) Calling .GetSSHKeyPath
I0729 20:08:42.497208  755195 main.go:141] libmachine: (functional-483711) Calling .GetSSHUsername
I0729 20:08:42.497387  755195 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/functional-483711/id_rsa Username:docker}
I0729 20:08:42.598638  755195 ssh_runner.go:195] Run: sudo crictl images --output json
I0729 20:08:42.648121  755195 main.go:141] libmachine: Making call to close driver server
I0729 20:08:42.648138  755195 main.go:141] libmachine: (functional-483711) Calling .Close
I0729 20:08:42.648485  755195 main.go:141] libmachine: Successfully made call to close driver server
I0729 20:08:42.648536  755195 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 20:08:42.648570  755195 main.go:141] libmachine: (functional-483711) DBG | Closing plugin on server side
I0729 20:08:42.648582  755195 main.go:141] libmachine: Making call to close driver server
I0729 20:08:42.648595  755195 main.go:141] libmachine: (functional-483711) Calling .Close
I0729 20:08:42.648850  755195 main.go:141] libmachine: (functional-483711) DBG | Closing plugin on server side
I0729 20:08:42.648880  755195 main.go:141] libmachine: Successfully made call to close driver server
I0729 20:08:42.648889  755195 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483711 ssh pgrep buildkitd: exit status 1 (179.972268ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image build -t localhost/my-image:functional-483711 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-483711 image build -t localhost/my-image:functional-483711 testdata/build --alsologtostderr: (2.9616705s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-483711 image build -t localhost/my-image:functional-483711 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 134f31c160f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-483711
--> 60ac31af3ae
Successfully tagged localhost/my-image:functional-483711
60ac31af3ae1d92b6bb65011481d2f4e745f4d0c1eb4270031ecec85e0583c9a
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-483711 image build -t localhost/my-image:functional-483711 testdata/build --alsologtostderr:
I0729 20:08:42.876262  755253 out.go:291] Setting OutFile to fd 1 ...
I0729 20:08:42.876501  755253 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 20:08:42.876510  755253 out.go:304] Setting ErrFile to fd 2...
I0729 20:08:42.876514  755253 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 20:08:42.876678  755253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
I0729 20:08:42.877186  755253 config.go:182] Loaded profile config "functional-483711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 20:08:42.877700  755253 config.go:182] Loaded profile config "functional-483711": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
I0729 20:08:42.878079  755253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 20:08:42.878119  755253 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 20:08:42.893627  755253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40995
I0729 20:08:42.894140  755253 main.go:141] libmachine: () Calling .GetVersion
I0729 20:08:42.894767  755253 main.go:141] libmachine: Using API Version  1
I0729 20:08:42.894796  755253 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 20:08:42.895137  755253 main.go:141] libmachine: () Calling .GetMachineName
I0729 20:08:42.895364  755253 main.go:141] libmachine: (functional-483711) Calling .GetState
I0729 20:08:42.897324  755253 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0729 20:08:42.897371  755253 main.go:141] libmachine: Launching plugin server for driver kvm2
I0729 20:08:42.912732  755253 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37599
I0729 20:08:42.913172  755253 main.go:141] libmachine: () Calling .GetVersion
I0729 20:08:42.913668  755253 main.go:141] libmachine: Using API Version  1
I0729 20:08:42.913689  755253 main.go:141] libmachine: () Calling .SetConfigRaw
I0729 20:08:42.913996  755253 main.go:141] libmachine: () Calling .GetMachineName
I0729 20:08:42.914194  755253 main.go:141] libmachine: (functional-483711) Calling .DriverName
I0729 20:08:42.914408  755253 ssh_runner.go:195] Run: systemctl --version
I0729 20:08:42.914432  755253 main.go:141] libmachine: (functional-483711) Calling .GetSSHHostname
I0729 20:08:42.917022  755253 main.go:141] libmachine: (functional-483711) DBG | domain functional-483711 has defined MAC address 52:54:00:74:5b:f7 in network mk-functional-483711
I0729 20:08:42.917370  755253 main.go:141] libmachine: (functional-483711) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:74:5b:f7", ip: ""} in network mk-functional-483711: {Iface:virbr1 ExpiryTime:2024-07-29 21:06:04 +0000 UTC Type:0 Mac:52:54:00:74:5b:f7 Iaid: IPaddr:192.168.39.12 Prefix:24 Hostname:functional-483711 Clientid:01:52:54:00:74:5b:f7}
I0729 20:08:42.917408  755253 main.go:141] libmachine: (functional-483711) DBG | domain functional-483711 has defined IP address 192.168.39.12 and MAC address 52:54:00:74:5b:f7 in network mk-functional-483711
I0729 20:08:42.917496  755253 main.go:141] libmachine: (functional-483711) Calling .GetSSHPort
I0729 20:08:42.917665  755253 main.go:141] libmachine: (functional-483711) Calling .GetSSHKeyPath
I0729 20:08:42.917809  755253 main.go:141] libmachine: (functional-483711) Calling .GetSSHUsername
I0729 20:08:42.917967  755253 sshutil.go:53] new ssh client: &{IP:192.168.39.12 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/functional-483711/id_rsa Username:docker}
I0729 20:08:42.999172  755253 build_images.go:161] Building image from path: /tmp/build.1777265833.tar
I0729 20:08:42.999251  755253 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 20:08:43.010551  755253 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1777265833.tar
I0729 20:08:43.014440  755253 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1777265833.tar: stat -c "%s %y" /var/lib/minikube/build/build.1777265833.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1777265833.tar': No such file or directory
I0729 20:08:43.014466  755253 ssh_runner.go:362] scp /tmp/build.1777265833.tar --> /var/lib/minikube/build/build.1777265833.tar (3072 bytes)
I0729 20:08:43.039112  755253 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1777265833
I0729 20:08:43.050048  755253 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1777265833 -xf /var/lib/minikube/build/build.1777265833.tar
I0729 20:08:43.060701  755253 crio.go:315] Building image: /var/lib/minikube/build/build.1777265833
I0729 20:08:43.060772  755253 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-483711 /var/lib/minikube/build/build.1777265833 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0729 20:08:45.760385  755253 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-483711 /var/lib/minikube/build/build.1777265833 --cgroup-manager=cgroupfs: (2.699580471s)
I0729 20:08:45.760491  755253 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1777265833
I0729 20:08:45.780231  755253 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1777265833.tar
I0729 20:08:45.792011  755253 build_images.go:217] Built localhost/my-image:functional-483711 from /tmp/build.1777265833.tar
I0729 20:08:45.792065  755253 build_images.go:133] succeeded building to: functional-483711
I0729 20:08:45.792072  755253 build_images.go:134] failed building to: 
I0729 20:08:45.792104  755253 main.go:141] libmachine: Making call to close driver server
I0729 20:08:45.792121  755253 main.go:141] libmachine: (functional-483711) Calling .Close
I0729 20:08:45.792397  755253 main.go:141] libmachine: Successfully made call to close driver server
I0729 20:08:45.792417  755253 main.go:141] libmachine: Making call to close connection to plugin binary
I0729 20:08:45.792420  755253 main.go:141] libmachine: (functional-483711) DBG | Closing plugin on server side
I0729 20:08:45.792425  755253 main.go:141] libmachine: Making call to close driver server
I0729 20:08:45.792436  755253 main.go:141] libmachine: (functional-483711) Calling .Close
I0729 20:08:45.792671  755253 main.go:141] libmachine: Successfully made call to close driver server
I0729 20:08:45.792683  755253 main.go:141] libmachine: (functional-483711) DBG | Closing plugin on server side
I0729 20:08:45.792699  755253 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.44s)
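For reference, the ImageBuild flow above amounts to two CLI calls: an image build against testdata/build followed by an image ls to confirm the tag exists. Below is a hedged Go sketch that replays the same sequence with os/exec; the binary path and profile name are copied from the log, are specific to this CI run, and would need adjusting elsewhere.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Values taken verbatim from the log above; illustrative only.
	minikube := "out/minikube-linux-amd64"
	profile := "functional-483711"

	// Build localhost/my-image:functional-483711 from the testdata/build context.
	build := exec.Command(minikube, "-p", profile, "image", "build",
		"-t", "localhost/my-image:"+profile, "testdata/build")
	build.Stdout, build.Stderr = os.Stdout, os.Stderr
	if err := build.Run(); err != nil {
		log.Fatalf("image build failed: %v", err)
	}

	// Mirror the follow-up "image ls" check (functional_test.go:447).
	list := exec.Command(minikube, "-p", profile, "image", "ls")
	list.Stdout, list.Stderr = os.Stdout, os.Stderr
	if err := list.Run(); err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
}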

TestFunctional/parallel/ImageCommands/Setup (1.75s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.726169391s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-483711
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.75s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.26s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image load --daemon docker.io/kicbase/echo-server:functional-483711 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-483711 image load --daemon docker.io/kicbase/echo-server:functional-483711 --alsologtostderr: (2.041731269s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.26s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image load --daemon docker.io/kicbase/echo-server:functional-483711 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.87s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.7s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-483711
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image load --daemon docker.io/kicbase/echo-server:functional-483711 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.70s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image save docker.io/kicbase/echo-server:functional-483711 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image rm docker.io/kicbase/echo-server:functional-483711 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.06s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.06s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-483711
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 image save --daemon docker.io/kicbase/echo-server:functional-483711 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-483711 image save --daemon docker.io/kicbase/echo-server:functional-483711 --alsologtostderr: (3.13656917s)
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-483711
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.18s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/MountCmd/any-port (8.43s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-483711 /tmp/TestFunctionalparallelMountCmdany-port3383048488/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722283707167959795" to /tmp/TestFunctionalparallelMountCmdany-port3383048488/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722283707167959795" to /tmp/TestFunctionalparallelMountCmdany-port3383048488/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722283707167959795" to /tmp/TestFunctionalparallelMountCmdany-port3383048488/001/test-1722283707167959795
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483711 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (250.479014ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 20:08 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 20:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 20:08 test-1722283707167959795
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh cat /mount-9p/test-1722283707167959795
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-483711 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fd766d82-753d-4719-bb45-cdb915aafbc9] Pending
helpers_test.go:344: "busybox-mount" [fd766d82-753d-4719-bb45-cdb915aafbc9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fd766d82-753d-4719-bb45-cdb915aafbc9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fd766d82-753d-4719-bb45-cdb915aafbc9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.00480305s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-483711 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-483711 /tmp/TestFunctionalparallelMountCmdany-port3383048488/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.43s)

TestFunctional/parallel/ServiceCmd/List (0.91s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.91s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 service list -o json
functional_test.go:1490: Took "839.878903ms" to run "out/minikube-linux-amd64 -p functional-483711 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.84s)

TestFunctional/parallel/MountCmd/specific-port (1.89s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-483711 /tmp/TestFunctionalparallelMountCmdspecific-port586213324/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483711 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (208.64592ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-483711 /tmp/TestFunctionalparallelMountCmdspecific-port586213324/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483711 ssh "sudo umount -f /mount-9p": exit status 1 (204.680873ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-483711 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-483711 /tmp/TestFunctionalparallelMountCmdspecific-port586213324/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.3s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.12:32571
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.30s)

TestFunctional/parallel/ServiceCmd/Format (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

TestFunctional/parallel/ServiceCmd/URL (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.12:32571
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.29s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-483711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2230754894/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-483711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2230754894/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-483711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2230754894/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-483711 ssh "findmnt -T" /mount1: exit status 1 (211.485589ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-483711 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-483711 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-483711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2230754894/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-483711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2230754894/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-483711 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2230754894/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.23s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-483711
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-483711
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-483711
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (227.21s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-344518 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-344518 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m46.574778406s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (227.21s)

TestMultiControlPlane/serial/DeployApp (6.95s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-344518 -- rollout status deployment/busybox: (4.869436384s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-22rcc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-fp24v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-xn8rr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-22rcc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-fp24v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-xn8rr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-22rcc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-fp24v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-xn8rr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.95s)

TestMultiControlPlane/serial/PingHostFromPods (1.2s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-22rcc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-22rcc -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-fp24v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-fp24v -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-xn8rr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-344518 -- exec busybox-fc5497c4f-xn8rr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.20s)

TestMultiControlPlane/serial/AddWorkerNode (56.15s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-344518 -v=7 --alsologtostderr
E0729 20:13:14.089729  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:13:14.095824  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:13:14.106097  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:13:14.126406  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:13:14.166752  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:13:14.247101  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:13:14.407538  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:13:14.728146  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:13:15.368651  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:13:16.649153  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:13:19.211011  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:13:24.331937  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:13:34.572150  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:13:55.052932  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-344518 -v=7 --alsologtostderr: (55.33253783s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.15s)
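The node-add step above is the minikube CLI driven from the test binary. A minimal, hypothetical sketch of the same sequence (the binary path is the one used throughout this report; the profile name is a placeholder):

// addworker_sketch.go - hypothetical sketch: add a worker node to an existing
// profile, then ask the cluster for its status.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	profile := "example-profile" // placeholder
	run("node", "add", "-p", profile, "-v=7", "--alsologtostderr")
	fmt.Println(run("-p", profile, "status", "-v=7", "--alsologtostderr"))
}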

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-344518 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)
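The jsonpath query above collects .metadata.labels for every node. A small sketch that pulls the same information by decoding `kubectl get nodes -o json` instead of jsonpath (the context name is a placeholder):

// nodelabels_sketch.go - minimal sketch: print each node's name and labels.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name   string            `json:"name"`
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "example-context",
		"get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Metadata.Name, n.Metadata.Labels)
	}
}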

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

TestMultiControlPlane/serial/CopyFile (12.65s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp testdata/cp-test.txt ha-344518:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1656315222/001/cp-test_ha-344518.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518:/home/docker/cp-test.txt ha-344518-m02:/home/docker/cp-test_ha-344518_ha-344518-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m02 "sudo cat /home/docker/cp-test_ha-344518_ha-344518-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518:/home/docker/cp-test.txt ha-344518-m03:/home/docker/cp-test_ha-344518_ha-344518-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m03 "sudo cat /home/docker/cp-test_ha-344518_ha-344518-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518:/home/docker/cp-test.txt ha-344518-m04:/home/docker/cp-test_ha-344518_ha-344518-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m04 "sudo cat /home/docker/cp-test_ha-344518_ha-344518-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp testdata/cp-test.txt ha-344518-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1656315222/001/cp-test_ha-344518-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518-m02:/home/docker/cp-test.txt ha-344518:/home/docker/cp-test_ha-344518-m02_ha-344518.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518 "sudo cat /home/docker/cp-test_ha-344518-m02_ha-344518.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518-m02:/home/docker/cp-test.txt ha-344518-m03:/home/docker/cp-test_ha-344518-m02_ha-344518-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m03 "sudo cat /home/docker/cp-test_ha-344518-m02_ha-344518-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518-m02:/home/docker/cp-test.txt ha-344518-m04:/home/docker/cp-test_ha-344518-m02_ha-344518-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m04 "sudo cat /home/docker/cp-test_ha-344518-m02_ha-344518-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp testdata/cp-test.txt ha-344518-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1656315222/001/cp-test_ha-344518-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518-m03:/home/docker/cp-test.txt ha-344518:/home/docker/cp-test_ha-344518-m03_ha-344518.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518 "sudo cat /home/docker/cp-test_ha-344518-m03_ha-344518.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518-m03:/home/docker/cp-test.txt ha-344518-m02:/home/docker/cp-test_ha-344518-m03_ha-344518-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m02 "sudo cat /home/docker/cp-test_ha-344518-m03_ha-344518-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518-m03:/home/docker/cp-test.txt ha-344518-m04:/home/docker/cp-test_ha-344518-m03_ha-344518-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m04 "sudo cat /home/docker/cp-test_ha-344518-m03_ha-344518-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp testdata/cp-test.txt ha-344518-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1656315222/001/cp-test_ha-344518-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt ha-344518:/home/docker/cp-test_ha-344518-m04_ha-344518.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518 "sudo cat /home/docker/cp-test_ha-344518-m04_ha-344518.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt ha-344518-m02:/home/docker/cp-test_ha-344518-m04_ha-344518-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m02 "sudo cat /home/docker/cp-test_ha-344518-m04_ha-344518-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 cp ha-344518-m04:/home/docker/cp-test.txt ha-344518-m03:/home/docker/cp-test_ha-344518-m04_ha-344518-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 ssh -n ha-344518-m03 "sudo cat /home/docker/cp-test_ha-344518-m04_ha-344518-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.65s)
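Each cp/ssh pair above is the same round-trip: copy a file onto a node with `minikube cp`, read it back with `minikube ssh -n <node> "sudo cat ..."`, and compare it with the original. A minimal sketch of one round-trip, with placeholder profile, node, and file names:

// copyfile_sketch.go - hypothetical sketch of a single cp/ssh verification.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const (
		profile = "example-profile"     // placeholder
		node    = "example-profile-m02" // placeholder
		local   = "testdata/cp-test.txt"
		remote  = "/home/docker/cp-test.txt"
	)
	want, err := os.ReadFile(local)
	if err != nil {
		panic(err)
	}
	// Copy the file onto the target node.
	if err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"cp", local, node+":"+remote).Run(); err != nil {
		panic(err)
	}
	// Read it back over SSH and compare with the original.
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "-n", node, "sudo cat "+remote).Output()
	if err != nil {
		panic(err)
	}
	if string(got) != string(want) {
		panic("copied file does not match the original")
	}
	fmt.Println("cp round-trip verified on", node)
}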

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.469920332s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.47s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.4s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.40s)

TestMultiControlPlane/serial/DeleteSecondaryNode (17.14s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-344518 node delete m03 -v=7 --alsologtostderr: (16.408747952s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.14s)
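The go-template passed to `kubectl get nodes` above prints the status of each node's Ready condition, one per line. A self-contained sketch that evaluates the same template against a small hand-built node list, so its behaviour can be seen without a cluster (the sample data is invented for illustration):

// readytemplate_sketch.go - evaluate the Ready-condition template locally.
package main

import (
	"os"
	"text/template"
)

const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	// Shape mirrors `kubectl get nodes -o json`: items[].status.conditions[].
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
				map[string]any{"type": "MemoryPressure", "status": "False"},
			}}},
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(tmpl))
	// Prints one " True" line per node whose Ready condition is True.
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}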

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

TestMultiControlPlane/serial/RestartCluster (346.29s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-344518 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 20:28:14.089785  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
E0729 20:29:37.137191  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-344518 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m45.535586223s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (346.29s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.36s)

TestMultiControlPlane/serial/AddSecondaryNode (75.84s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-344518 --control-plane -v=7 --alsologtostderr
E0729 20:33:14.090106  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-344518 --control-plane -v=7 --alsologtostderr: (1m15.021173473s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-344518 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.84s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.52s)

TestJSONOutput/start/Command (57.13s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-909508 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-909508 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (57.132208927s)
--- PASS: TestJSONOutput/start/Command (57.13s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-909508 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-909508 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.59s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-909508 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-909508 --output=json --user=testUser: (6.586745842s)
--- PASS: TestJSONOutput/stop/Command (6.59s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-411171 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-411171 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.692272ms)

-- stdout --
	{"specversion":"1.0","id":"716834ad-6390-48f9-974c-a4c16e01dea3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-411171] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"911c02b2-009f-443b-821e-4d73e7b94a7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19344"}}
	{"specversion":"1.0","id":"a69ae7e9-41b5-4b2c-b5e4-bab4c9bf170c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b697833b-02c1-442d-8ba1-48bbc3c7d896","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig"}}
	{"specversion":"1.0","id":"f19a3528-c5ab-4fe7-9db0-cc14d679f429","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube"}}
	{"specversion":"1.0","id":"71f8f250-d58d-4afd-854e-0065e8544e90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"375f03de-8aa0-4c92-ba71-8a2e8122844a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bdbb68b5-44da-4e8b-a8be-7d19ca8e8e0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-411171" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-411171
--- PASS: TestErrorJSONOutput (0.19s)
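Each line in the stdout block above is a CloudEvents-style JSON object, and the `data` payload carries the step, info, or error details (here a DRV_UNSUPPORTED_OS error with exit code 56). A minimal sketch of a consumer for that stream, reading events line by line from stdin and keeping only the fields that appear above:

// jsonevent_sketch.go - decode minikube's --output=json event stream.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Example: out/minikube-linux-amd64 start -p demo --output=json | go run jsonevent_sketch.go
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Printf("%-40s %s\n", ev.Type, ev.Data["message"])
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
}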

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (83.6s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-421085 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-421085 --driver=kvm2  --container-runtime=crio: (41.288619355s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-424712 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-424712 --driver=kvm2  --container-runtime=crio: (39.667006652s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-421085
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-424712
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-424712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-424712
helpers_test.go:175: Cleaning up "first-421085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-421085
--- PASS: TestMinikubeProfile (83.60s)
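The `profile list -ojson` calls above return a machine-readable profile listing. The exact schema varies between minikube versions, so the sketch below decodes it generically, assuming only that the top level is a JSON object of named profile arrays (for example valid/invalid groupings):

// profilelist_sketch.go - hypothetical sketch: summarise `profile list -o json`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key, raw := range doc {
		var entries []json.RawMessage
		if err := json.Unmarshal(raw, &entries); err == nil {
			fmt.Printf("%s: %d profile(s)\n", key, len(entries))
		} else {
			fmt.Printf("%s: %s\n", key, raw)
		}
	}
}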

TestMountStart/serial/StartWithMountFirst (27.8s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-398693 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-398693 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.798038132s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.80s)

TestMountStart/serial/VerifyMountFirst (0.38s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-398693 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-398693 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)
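The two commands above verify a 9p host mount: list the mounted directory over `minikube ssh`, then confirm a 9p entry appears in the guest's mount table. A minimal sketch of the same checks with a placeholder profile name (the grep is done in Go rather than in the guest shell):

// verifymount_sketch.go - hypothetical sketch of the 9p mount verification.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "example-mount-profile" // placeholder
	// List the mount point inside the guest.
	ls, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "--", "ls", "/minikube-host").Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(ls))
	// Check that the guest's mount table contains a 9p entry.
	mounts, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "--", "mount").Output()
	if err != nil {
		panic(err)
	}
	if !strings.Contains(string(mounts), "9p") {
		panic("no 9p mount found in the guest")
	}
	fmt.Println("9p mount present")
}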

TestMountStart/serial/StartWithMountSecond (23.57s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-418093 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-418093 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (22.573573859s)
--- PASS: TestMountStart/serial/StartWithMountSecond (23.57s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418093 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418093 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-398693 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418093 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418093 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-418093
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-418093: (1.272600581s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (23.42s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-418093
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-418093: (22.41675887s)
--- PASS: TestMountStart/serial/RestartStopped (23.42s)

TestMountStart/serial/VerifyMountPostStop (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418093 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-418093 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (118.66s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-151054 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0729 20:38:14.090257  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-151054 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m58.257804847s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (118.66s)

TestMultiNode/serial/DeployApp2Nodes (5.13s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-151054 -- rollout status deployment/busybox: (3.706793671s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- exec busybox-fc5497c4f-rvsbf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- exec busybox-fc5497c4f-xzlcl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- exec busybox-fc5497c4f-rvsbf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- exec busybox-fc5497c4f-xzlcl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- exec busybox-fc5497c4f-rvsbf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- exec busybox-fc5497c4f-xzlcl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.13s)
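The deploy step above applies a busybox manifest, waits for the rollout, and then resolves the in-cluster DNS names from the pods. A hypothetical sketch of that flow using the minikube kubectl pass-through seen in the log (the manifest path is the one shown above; profile and pod name are placeholders, not this run's values):

// deploydns_sketch.go - hypothetical sketch: deploy, wait, then check DNS.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(profile string, args ...string) string {
	cmd := exec.Command("out/minikube-linux-amd64",
		append([]string{"kubectl", "-p", profile, "--"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	profile := "example-multinode" // placeholder
	kubectl(profile, "apply", "-f", "./testdata/multinodes/multinode-pod-dns-test.yaml")
	kubectl(profile, "rollout", "status", "deployment/busybox")
	pod := "busybox-example" // placeholder pod name
	for _, name := range []string{"kubernetes.io", "kubernetes.default",
		"kubernetes.default.svc.cluster.local"} {
		fmt.Print(kubectl(profile, "exec", pod, "--", "nslookup", name))
	}
}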

TestMultiNode/serial/PingHostFrom2Pods (0.79s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- exec busybox-fc5497c4f-rvsbf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- exec busybox-fc5497c4f-rvsbf -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- exec busybox-fc5497c4f-xzlcl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-151054 -- exec busybox-fc5497c4f-xzlcl -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

TestMultiNode/serial/AddNode (47.35s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-151054 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-151054 -v 3 --alsologtostderr: (46.783022168s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.35s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-151054 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.14s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 cp testdata/cp-test.txt multinode-151054:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 cp multinode-151054:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2361961589/001/cp-test_multinode-151054.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 cp multinode-151054:/home/docker/cp-test.txt multinode-151054-m02:/home/docker/cp-test_multinode-151054_multinode-151054-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054-m02 "sudo cat /home/docker/cp-test_multinode-151054_multinode-151054-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 cp multinode-151054:/home/docker/cp-test.txt multinode-151054-m03:/home/docker/cp-test_multinode-151054_multinode-151054-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054-m03 "sudo cat /home/docker/cp-test_multinode-151054_multinode-151054-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 cp testdata/cp-test.txt multinode-151054-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 cp multinode-151054-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2361961589/001/cp-test_multinode-151054-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 cp multinode-151054-m02:/home/docker/cp-test.txt multinode-151054:/home/docker/cp-test_multinode-151054-m02_multinode-151054.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054 "sudo cat /home/docker/cp-test_multinode-151054-m02_multinode-151054.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 cp multinode-151054-m02:/home/docker/cp-test.txt multinode-151054-m03:/home/docker/cp-test_multinode-151054-m02_multinode-151054-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054-m03 "sudo cat /home/docker/cp-test_multinode-151054-m02_multinode-151054-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 cp testdata/cp-test.txt multinode-151054-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 cp multinode-151054-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2361961589/001/cp-test_multinode-151054-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 cp multinode-151054-m03:/home/docker/cp-test.txt multinode-151054:/home/docker/cp-test_multinode-151054-m03_multinode-151054.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054 "sudo cat /home/docker/cp-test_multinode-151054-m03_multinode-151054.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 cp multinode-151054-m03:/home/docker/cp-test.txt multinode-151054-m02:/home/docker/cp-test_multinode-151054-m03_multinode-151054-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 ssh -n multinode-151054-m02 "sudo cat /home/docker/cp-test_multinode-151054-m03_multinode-151054-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.14s)

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-151054 node stop m03: (1.438587975s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-151054 status: exit status 7 (418.057304ms)

-- stdout --
	multinode-151054
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-151054-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-151054-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-151054 status --alsologtostderr: exit status 7 (414.826159ms)

-- stdout --
	multinode-151054
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-151054-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-151054-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0729 20:40:21.564236  773258 out.go:291] Setting OutFile to fd 1 ...
	I0729 20:40:21.564346  773258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:40:21.564353  773258 out.go:304] Setting ErrFile to fd 2...
	I0729 20:40:21.564357  773258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 20:40:21.564521  773258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19344-733808/.minikube/bin
	I0729 20:40:21.564669  773258 out.go:298] Setting JSON to false
	I0729 20:40:21.564695  773258 mustload.go:65] Loading cluster: multinode-151054
	I0729 20:40:21.564828  773258 notify.go:220] Checking for updates...
	I0729 20:40:21.565059  773258 config.go:182] Loaded profile config "multinode-151054": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.30.3
	I0729 20:40:21.565074  773258 status.go:255] checking status of multinode-151054 ...
	I0729 20:40:21.565434  773258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:40:21.565495  773258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:40:21.583933  773258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38235
	I0729 20:40:21.584437  773258 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:40:21.585205  773258 main.go:141] libmachine: Using API Version  1
	I0729 20:40:21.585243  773258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:40:21.585606  773258 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:40:21.585858  773258 main.go:141] libmachine: (multinode-151054) Calling .GetState
	I0729 20:40:21.587559  773258 status.go:330] multinode-151054 host status = "Running" (err=<nil>)
	I0729 20:40:21.587584  773258 host.go:66] Checking if "multinode-151054" exists ...
	I0729 20:40:21.587855  773258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:40:21.587890  773258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:40:21.603467  773258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35721
	I0729 20:40:21.603896  773258 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:40:21.604401  773258 main.go:141] libmachine: Using API Version  1
	I0729 20:40:21.604425  773258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:40:21.604709  773258 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:40:21.604888  773258 main.go:141] libmachine: (multinode-151054) Calling .GetIP
	I0729 20:40:21.607873  773258 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:40:21.608356  773258 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:40:21.608379  773258 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:40:21.608647  773258 host.go:66] Checking if "multinode-151054" exists ...
	I0729 20:40:21.609043  773258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:40:21.609101  773258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:40:21.624939  773258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33507
	I0729 20:40:21.625386  773258 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:40:21.625864  773258 main.go:141] libmachine: Using API Version  1
	I0729 20:40:21.625882  773258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:40:21.626211  773258 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:40:21.626407  773258 main.go:141] libmachine: (multinode-151054) Calling .DriverName
	I0729 20:40:21.626597  773258 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:40:21.626623  773258 main.go:141] libmachine: (multinode-151054) Calling .GetSSHHostname
	I0729 20:40:21.629485  773258 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:40:21.629936  773258 main.go:141] libmachine: (multinode-151054) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f6:c7:7a", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:37:34 +0000 UTC Type:0 Mac:52:54:00:f6:c7:7a Iaid: IPaddr:192.168.39.229 Prefix:24 Hostname:multinode-151054 Clientid:01:52:54:00:f6:c7:7a}
	I0729 20:40:21.629966  773258 main.go:141] libmachine: (multinode-151054) DBG | domain multinode-151054 has defined IP address 192.168.39.229 and MAC address 52:54:00:f6:c7:7a in network mk-multinode-151054
	I0729 20:40:21.630131  773258 main.go:141] libmachine: (multinode-151054) Calling .GetSSHPort
	I0729 20:40:21.630325  773258 main.go:141] libmachine: (multinode-151054) Calling .GetSSHKeyPath
	I0729 20:40:21.630617  773258 main.go:141] libmachine: (multinode-151054) Calling .GetSSHUsername
	I0729 20:40:21.630758  773258 sshutil.go:53] new ssh client: &{IP:192.168.39.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/multinode-151054/id_rsa Username:docker}
	I0729 20:40:21.711032  773258 ssh_runner.go:195] Run: systemctl --version
	I0729 20:40:21.716740  773258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:40:21.730065  773258 kubeconfig.go:125] found "multinode-151054" server: "https://192.168.39.229:8443"
	I0729 20:40:21.730097  773258 api_server.go:166] Checking apiserver status ...
	I0729 20:40:21.730142  773258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 20:40:21.743021  773258 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1133/cgroup
	W0729 20:40:21.752478  773258 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1133/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0729 20:40:21.752526  773258 ssh_runner.go:195] Run: ls
	I0729 20:40:21.756573  773258 api_server.go:253] Checking apiserver healthz at https://192.168.39.229:8443/healthz ...
	I0729 20:40:21.760924  773258 api_server.go:279] https://192.168.39.229:8443/healthz returned 200:
	ok
	I0729 20:40:21.760957  773258 status.go:422] multinode-151054 apiserver status = Running (err=<nil>)
	I0729 20:40:21.760967  773258 status.go:257] multinode-151054 status: &{Name:multinode-151054 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:40:21.761005  773258 status.go:255] checking status of multinode-151054-m02 ...
	I0729 20:40:21.761287  773258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:40:21.761330  773258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:40:21.776881  773258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41447
	I0729 20:40:21.777314  773258 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:40:21.777814  773258 main.go:141] libmachine: Using API Version  1
	I0729 20:40:21.777837  773258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:40:21.778148  773258 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:40:21.778443  773258 main.go:141] libmachine: (multinode-151054-m02) Calling .GetState
	I0729 20:40:21.780286  773258 status.go:330] multinode-151054-m02 host status = "Running" (err=<nil>)
	I0729 20:40:21.780306  773258 host.go:66] Checking if "multinode-151054-m02" exists ...
	I0729 20:40:21.780730  773258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:40:21.780779  773258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:40:21.796093  773258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40323
	I0729 20:40:21.796533  773258 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:40:21.797062  773258 main.go:141] libmachine: Using API Version  1
	I0729 20:40:21.797085  773258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:40:21.797398  773258 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:40:21.797617  773258 main.go:141] libmachine: (multinode-151054-m02) Calling .GetIP
	I0729 20:40:21.800165  773258 main.go:141] libmachine: (multinode-151054-m02) DBG | domain multinode-151054-m02 has defined MAC address 52:54:00:15:b7:b8 in network mk-multinode-151054
	I0729 20:40:21.800518  773258 main.go:141] libmachine: (multinode-151054-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b7:b8", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:38:42 +0000 UTC Type:0 Mac:52:54:00:15:b7:b8 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:multinode-151054-m02 Clientid:01:52:54:00:15:b7:b8}
	I0729 20:40:21.800540  773258 main.go:141] libmachine: (multinode-151054-m02) DBG | domain multinode-151054-m02 has defined IP address 192.168.39.98 and MAC address 52:54:00:15:b7:b8 in network mk-multinode-151054
	I0729 20:40:21.800694  773258 host.go:66] Checking if "multinode-151054-m02" exists ...
	I0729 20:40:21.800984  773258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:40:21.801038  773258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:40:21.816371  773258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34125
	I0729 20:40:21.816891  773258 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:40:21.817324  773258 main.go:141] libmachine: Using API Version  1
	I0729 20:40:21.817345  773258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:40:21.817669  773258 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:40:21.817854  773258 main.go:141] libmachine: (multinode-151054-m02) Calling .DriverName
	I0729 20:40:21.818073  773258 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 20:40:21.818098  773258 main.go:141] libmachine: (multinode-151054-m02) Calling .GetSSHHostname
	I0729 20:40:21.820801  773258 main.go:141] libmachine: (multinode-151054-m02) DBG | domain multinode-151054-m02 has defined MAC address 52:54:00:15:b7:b8 in network mk-multinode-151054
	I0729 20:40:21.821177  773258 main.go:141] libmachine: (multinode-151054-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:b7:b8", ip: ""} in network mk-multinode-151054: {Iface:virbr1 ExpiryTime:2024-07-29 21:38:42 +0000 UTC Type:0 Mac:52:54:00:15:b7:b8 Iaid: IPaddr:192.168.39.98 Prefix:24 Hostname:multinode-151054-m02 Clientid:01:52:54:00:15:b7:b8}
	I0729 20:40:21.821197  773258 main.go:141] libmachine: (multinode-151054-m02) DBG | domain multinode-151054-m02 has defined IP address 192.168.39.98 and MAC address 52:54:00:15:b7:b8 in network mk-multinode-151054
	I0729 20:40:21.821371  773258 main.go:141] libmachine: (multinode-151054-m02) Calling .GetSSHPort
	I0729 20:40:21.821583  773258 main.go:141] libmachine: (multinode-151054-m02) Calling .GetSSHKeyPath
	I0729 20:40:21.821727  773258 main.go:141] libmachine: (multinode-151054-m02) Calling .GetSSHUsername
	I0729 20:40:21.821859  773258 sshutil.go:53] new ssh client: &{IP:192.168.39.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19344-733808/.minikube/machines/multinode-151054-m02/id_rsa Username:docker}
	I0729 20:40:21.902716  773258 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 20:40:21.916149  773258 status.go:257] multinode-151054-m02 status: &{Name:multinode-151054-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0729 20:40:21.916189  773258 status.go:255] checking status of multinode-151054-m03 ...
	I0729 20:40:21.916551  773258 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0729 20:40:21.916597  773258 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0729 20:40:21.932568  773258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40371
	I0729 20:40:21.933079  773258 main.go:141] libmachine: () Calling .GetVersion
	I0729 20:40:21.933534  773258 main.go:141] libmachine: Using API Version  1
	I0729 20:40:21.933557  773258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0729 20:40:21.934030  773258 main.go:141] libmachine: () Calling .GetMachineName
	I0729 20:40:21.934372  773258 main.go:141] libmachine: (multinode-151054-m03) Calling .GetState
	I0729 20:40:21.936012  773258 status.go:330] multinode-151054-m03 host status = "Stopped" (err=<nil>)
	I0729 20:40:21.936026  773258 status.go:343] host is not running, skipping remaining checks
	I0729 20:40:21.936051  773258 status.go:257] multinode-151054-m03 status: &{Name:multinode-151054-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (38.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-151054 node start m03 -v=7 --alsologtostderr: (38.25457841s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (38.87s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-151054 node delete m03: (1.593929496s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.10s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (174.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-151054 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-151054 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m54.035436488s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-151054 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (174.56s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-151054
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-151054-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-151054-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (65.20996ms)

                                                
                                                
-- stdout --
	* [multinode-151054-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-151054-m02' is duplicated with machine name 'multinode-151054-m02' in profile 'multinode-151054'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-151054-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-151054-m03 --driver=kvm2  --container-runtime=crio: (43.112846039s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-151054
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-151054: exit status 80 (216.806205ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-151054 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-151054-m03 already exists in multinode-151054-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-151054-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.26s)

                                                
                                    
TestScheduledStopUnix (110.61s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-702985 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-702985 --memory=2048 --driver=kvm2  --container-runtime=crio: (38.999581793s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-702985 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-702985 -n scheduled-stop-702985
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-702985 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-702985 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-702985 -n scheduled-stop-702985
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-702985
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-702985 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0729 20:58:14.090845  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-702985
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-702985: exit status 7 (65.550231ms)

                                                
                                                
-- stdout --
	scheduled-stop-702985
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-702985 -n scheduled-stop-702985
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-702985 -n scheduled-stop-702985: exit status 7 (65.335545ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-702985" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-702985
--- PASS: TestScheduledStopUnix (110.61s)

                                                
                                    
TestRunningBinaryUpgrade (208.48s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3861800550 start -p running-upgrade-160077 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3861800550 start -p running-upgrade-160077 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m6.446503218s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-160077 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-160077 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m18.235777314s)
helpers_test.go:175: Cleaning up "running-upgrade-160077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-160077
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-160077: (1.444835761s)
--- PASS: TestRunningBinaryUpgrade (208.48s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-148160 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-148160 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (77.183812ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-148160] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19344
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19344-733808/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19344-733808/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (98.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-148160 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-148160 --driver=kvm2  --container-runtime=crio: (1m38.701739058s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-148160 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (98.94s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (55.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-148160 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-148160 --no-kubernetes --driver=kvm2  --container-runtime=crio: (54.690824069s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-148160 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-148160 status -o json: exit status 2 (231.285743ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-148160","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-148160
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (55.80s)

                                                
                                    
TestNoKubernetes/serial/Start (47.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-148160 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-148160 --no-kubernetes --driver=kvm2  --container-runtime=crio: (47.649207636s)
--- PASS: TestNoKubernetes/serial/Start (47.65s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-148160 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-148160 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.238491ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (2.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.578000501s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.14s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-148160
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-148160: (1.286917664s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (37.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-148160 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-148160 --driver=kvm2  --container-runtime=crio: (37.226566168s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (37.23s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-148160 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-148160 "sudo systemctl is-active --quiet service kubelet": exit status 1 (203.715549ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.20s)

                                                
                                    
TestPause/serial/Start (80.51s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-913034 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0729 21:02:57.138712  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-913034 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m20.509823391s)
--- PASS: TestPause/serial/Start (80.51s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (98.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2331947864 start -p stopped-upgrade-252364 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0729 21:03:14.089907  740962 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19344-733808/.minikube/profiles/functional-483711/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2331947864 start -p stopped-upgrade-252364 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (52.271555537s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2331947864 -p stopped-upgrade-252364 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2331947864 -p stopped-upgrade-252364 stop: (2.180067853s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-252364 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-252364 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.651062662s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (98.10s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-252364
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

                                                
                                    

Test skip (35/215)

Order  Skipped test  Duration (s)
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.30.3/cached-images 0
15 TestDownloadOnly/v1.30.3/binaries 0
16 TestDownloadOnly/v1.30.3/kubectl 0
23 TestDownloadOnly/v1.31.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.31.0-beta.0/binaries 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
105 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
106 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
108 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
109 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
110 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
111 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
112 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
113 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
167 TestImageBuild 0
194 TestKicCustomNetwork 0
195 TestKicExistingNetwork 0
196 TestKicCustomSubnet 0
197 TestKicStaticIP 0
229 TestChangeNoneUser 0
232 TestScheduledStopWindows 0
234 TestSkaffold 0
236 TestInsufficientStorage 0
240 TestMissingContainerUpgrade 0
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)